Renan Araujo

@RenanAraujo

regrantor

Research Manager at Rethink Priorities

https://araujorenan.com/


About Me

I’m deeply motivated to figure out the most effective ways to improve humanity’s well-being.

As part of this mission, I work on reducing existential risk as a research manager at Rethink Priorities. I also cofounded and directed Condor Camp, a project that supports talented Brazilians in contributing to this vision.

Previously, I worked on criminal justice reform and human rights: I did research, led a nonprofit, and worked in a criminal law court. I hold a master's in Criminal Justice Policy from LSE and a law degree from the Federal University of Pernambuco.

Outgoing donations

Comments


Renan Araujo

over 1 year ago

Why I’m excited about this project

  • Translations are a robust, low-risk, and relatively low-cost way to spread useful information to untapped audiences and regions.

  • Open Phil previously funded several translation grants but moved away from funding part-time and/or individual translators in favor of more easily scalable, larger efforts (more here). Luan was one of the translators who didn't get such funding.

  • I haven’t read Luan’s translations, but he seems generally high-context, smart, and knowledgeable in English. Since he already did similar translation work before, I trust his ability to deliver well in this project.

  • Since Luan reached out to me, it cost me almost nothing to find this well-positioned translator. And unlike OP, I only have a small regranting pot anyway.

  • Luan’s translations seem unusually useful since they already have a pretty well-defined target audience: newly-founded AI safety study groups in top Brazilian universities.

Challenges and concerns

  • Maybe it’d be more cost-effective to fund a translator for Spanish or some other more popular language than Portuguese. I’d be keen to hear from folks doing that kind of work and for Manifund to support them, especially if OP isn’t.

    • But I expect the most popular languages to be correlated with OP support. I think I’d have expected even Portuguese to receive that kind of support, but maybe the lack of a large, professional translation service for EA work in Portuguese is the bottleneck here. 

  • I’m not sure translating the AGISF curriculum is the best option, considering I’d expect most of the target audience for that to read English relatively well and because I think there’s some nontrivial chance the curriculum will be updated relatively often (caveat: I haven’t checked this with Bluedot). 

    • But I updated after seeing the information Luan collected on people finding reading in Portuguese better for learning, and I can't think of an alternative resource that is as good for an introductory-level audience.


Renan Araujo

over 1 year ago

I like the idea of having hubs in various places, especially when they're visa-friendly and geographically close to potentially unstable regions. Based on Austin's comment below, this project seems to have been valuable to at least two people I also respect (even though I didn't ask them directly), which seems like a good enough proof-of-concept for a small grant.

I'd be keen to hear more about:

1) Other people who use or have used it, and what they thought about the experience

2) How many people you expect to serve (based on, e.g., trends in previous applicants, expansion plans, etc.)


Renan Araujo

over 1 year ago

See Joel's comment below for context.


Renan Araujo

over 1 year ago

Please see Joel's comment below for more context (our outreach to recommended researchers for an 'Aurora Scholarship' program).

I decided to donate $2.4k because that was my original funding target for this project, and it looks like it can productively accommodate more funding than the original value, considering Zhonghao and Neel's comments. I'm not donating more (I'd potentially go up to $3-4k based on my intuitive reading of the comments below) because I expect to support other participants of the 'Aurora Scholarship'.


Renan Araujo

over 1 year ago

Main points in favor of this grant

  • This project targets a relatively neglected area and has a compelling theory of change (which received feedback from various other experienced groups in the field during XST’s research process).

  • The team seems well-placed to plan and execute this project (they were selected through a systematic hiring process), and the support system to be provided by RP has a strong track record with similar projects (Condor Camp, ERA).

  • Since I (Renan) will have governance responsibilities for this project, I will be able to oversee and steer it more closely, making it more likely to be impactful than most grantees.

  • Separate from the potential direct impact, the information value from this project should be high. The x-risk space does not have other examples of founder search and pairing via formal hiring + dedicated support and this could inform future efforts at active grantmaking in the space.

  • This grant seems time-sensitive, since the project is due to start in early Jan and other grantmakers are not able to turn around funding by then. RP has limited capacity to fund the entirety of this project by itself at the moment, and this grant would fill a relevant gap.

Process for deciding amount

$25K is an estimate of the funding necessary to cover a pilot event (including venue booking, logistics, contractors, etc.)

Conflicts of interest

Please disclose e.g. any romantic, professional, financial, housemate, or familial relationships you have with the grant recipient(s).


1. The project is being incubated and fiscally sponsored by Rethink Priorities, which fiscally sponsors my employer, IAPS. I will play a role in the governance of the project along with my manager (Peter Wildeford) and RP’s Special Project team. 

2. The project idea came from my previous team’s (XST) research.

However, the founding team is independent, will own the IP associated with this project, and is only employed as contractors through RP’s fiscal sponsorship program. Ultimately, if successful, this project will likely spin off as an independent project. My involvement is likely only to occur during this initial pilot stage in an advisory capacity.

Additionally, despite RP receiving the funding due to the fiscal sponsorship scheme, this funding will be completely restricted to this project and Coby and Aishwarya, and can’t be used for any other purpose (i.e., can’t be used by RP for other RP-related expenses).


Renan Araujo

over 1 year ago

Thanks for the ping, @joel_bkr. This seems like an interesting opportunity! I have some questions that would help me better assess this opportunity, and plausibly help others too.

However, since answering these questions might take a lot of your time, I want to say upfront that, based on the info above, I think I'm unlikely to fund this opportunity, and if I did, I'd probably be closer to <$5k than $10k.

Here you go:

  1. I'm confused about the focus of the organization: is it on x-risk, global development, or both?

    You mention the focus of the fellowship is on AI safety, pandemic preparedness, and great power conflict at the beginning of the post. However, the list of host-led projects only includes organizations focused on global development-related issues. On the participant-led projects, you mention that "Mountaintop will select candidates serving a lower-income community in any country outside of the United States." Maybe orgs like CSR and 1DS would be closer to the x-risk camp, but it's unclear to me what your partnership with these orgs looks like, and I don't see references to any AI safety orgs.

  2. For the x-risk-focused placements, I'd be curious to hear more about the organizations you'd partner with and examples of projects you'd like to see. (Feel free to ping me (renan@rethinkpriorities.org) if you'd rather not share this publicly to avoid candidates gaming the selection process.)

    This would make a difference in how impactful I think this opportunity is – to be frank, at first glance the current list of partners for the leadership institute doesn't strike me as particularly exciting considering it's mostly focused on leadership-related skills (that I think can often be learned elsewhere) rather than neglected abilities or object-level expertise. However, I think it's impressive that you've partnered with several prestigious organizations, so I'm keen to learn more about your ability to partner with x-risk-relevant orgs.

  3. I'd love to see more concrete end and intermediate goals. Examples:

    1. How does this program reduce catastrophic risks?

      (Plausibly via great placements in relevant organizations, but my impression is that most of the orgs currently listed are not incredibly transformative NGOs or government agencies. So maybe the impactful placements would happen after the fellowship?)

    2. After people go through the leadership institute, what do you expect to change? How will you know you're on track?

      The timeline is quite long (end of 2025 is the target for this first iteration), so I'm interested in how you'll track progress throughout.

    3. After people go through the placement, how will you track success?


Renan Araujo

over 1 year ago

TLDR: I decided to regrant $12k to this project. I’m excited about an organized AI safety training program in an under-exposed, important region (Southeast Asia). I think the core team seems promising and worth the investment, despite their juniority. I think getting experienced mentors will be the main challenge (among others), but the team is aware of the relevant failure modes and taking the steps necessary to mitigate them. I’d be excited about others donating at least $23k more to this project to make their MVP possible.

Why I’m excited about this project

  • I’m keen to see new programs happen outside the main hub as a way to widen the surface area of opportunities for talented folks to engage with AI safety. Southeast Asia is one of the regions I’m most excited about due to its large population and geopolitical relevance.

  • The core team seems organized and promising. They’re quite junior, but seem worth the investment as a way of skilling up by doing. This seems relevant to me especially considering there aren’t other groups trying to fill this gap, as far as I can tell – this project can plausibly allow them to become the experienced folks that would guide others in the future.

  • To mitigate their juniority, they’re picking a research agenda with a track record of getting people interested in AI safety research and developing useful skills for AIS-relevant work. They’re also explicitly inserting their project into a broader pipeline, and establishing sensible metrics of success (e.g., participants winning Alignment Jams, getting into SERI MATS).

  • I had a call with Brian months ago and another with the whole team today. They gave me some more details about how they’re planning to skill themselves up and mitigate some of the concerns I mention here and Gaurav mentioned above. This made me even more confident about this grant.

Challenges and concerns 

  • I’m concerned about their ability to provide high-quality mentorship to program participants, considering their juniority and potential limitations around getting senior mentors involved.

    • They haven’t heard back from some relevant people yet (e.g., Neel Nanda), and haven’t run similar programs in the past.

    • I’d be keen for someone with experience in this kind of program to share their expectations about how this will go.

    • However, as I mentioned above, this seems like a positive bet in expectation to me.

  • Creating talent that will end up doing capabilities research

    • Mech interp is quite dual-use and being the first program in the region to skill people up in this might end up hyping capabilities rather than AIS.

    • However, I think a) they’re sufficiently aware of this failure mode, and b) AI is sufficiently mainstream I expect this failure mode not to have a big counterfactual downside.

  • I worry they won’t find sufficiently talented people, and that investing in talent around existing hubs might be more cost-effective.

    • I think this applies to all projects aimed at field-building outside hubs. However, I think we (as a community) haven’t invested enough in experimenting with programs like these yet – so the information value by itself seems worth it, in case there are low-hanging fruits available, considering a lot of talented people can’t effectively go to the main hubs due to, e.g., visa limitations. I’ve become more excited about this through my experience doing talent search via Condor Camp (Brazil) over the last 1.5 years.

  • As junior folks without a strong track record, I worry they might not be skilled enough to run an entire org/project by themselves. Maybe they won’t be able to follow through on their plans.

    • I think they individually have enough experience in their fields to make me confident about betting on them. In particular, in my interactions with Brian, he’s seemed quite organized and competent, and I’ve appreciated his work at CEA and in setting up EA Philippines. I know less about the other two team members, but at first glance they seem to have complementary skillsets and experience.


Renan Araujo

over 1 year ago

@briantan @GauravYadav just wanted to say this discussion was useful for me, thanks for bringing up those points and for the responses!


Renan Araujo

over 1 year ago

@kiko7 thanks! Ultimately, I decided not to evaluate this since I don't feel confident I have the right background for it. I encourage others with a more technical background to evaluate this grant.


Renan Araujo

over 1 year ago

I'd be curious about:

  1. Some examples of successful projects that counterfactually came out of CEEALAR.

  2. Your expected success rate based on the last 5 years (or maybe the last year, since this might be more representative of future success rate). By success rate I mean something like "impactful projects counterfactually coming out of CEEALAR that are above your bar of net value worth 1.4k".

  3. Any examples of net-negative projects or value that came out of CEEALAR? E.g., maybe someone being based in Blackpool who could instead have spent that time somewhere else, or some community-health problem stemming from many EAs living and working together.


Renan Araujo

over 1 year ago

Interesting project! I'm curious about a couple of things:

  1. What would the research agenda be like, most likely? (Eg what you think would be the most exciting version and the realistic version)

  2. How many people do you expect would work on that agenda, and what would their backgrounds be? (E.g., would they already have an alignment-related background, be technical folks just interested in the field, PhD students or faculty, etc.)


Renan Araujo

over 1 year ago

  • TLDR: I’m contributing to this project based on a) my experience running one of their hackathons in April 2023, which I thought was high quality, and b) my excitement to see this model scaled, as I think it makes an outsized contribution to the talent search pipeline for AI safety. I’d be interested in seeing someone with a more technical background evaluate the quality of the outputs, but I don’t think that’s the core of their impact here.

  • My experience with Apart

    • I ran an AI governance hackathon as part of my work at Condor Camp, an organization that does AI safety talent search focused on Brazilian university students.

    • We were experimenting with models different from the camp that could potentially be more cost-effective. We considered doing a hackathon, but the overhead of coming up with good questions, finding speakers/mentors, and overall putting together the intellectual infrastructure would have been too much for us. Apart Research solved that bottleneck.

    • We ran a hackathon (technically, an ideathon since it was on governance) with ~30 participants. This helped some Condor Camp alumni stay engaged with AI safety beyond the camp, and ultimately “reactivated” one of them who wasn’t working in AI governance. That participant led a team, received an honorary prize, and ended up using the hackathon project to get into a CHERI fellowship.

  • Why I’m excited about scaling this model

    • I think Apart’s approach solves a bottleneck for community builders/talent searchers, making a somewhat robust model (hackathon-type competitions) much more easily accessible.

    • It also reduces the cost of intellectual work for everyone: mentors and evaluators work on the best projects from all over the world, instead of having to repeat the work for every single iteration.

    • As a result, I expect them to help the community broaden our surface area considerably, finding talent more cost-effectively.

  • Challenges and concerns

    • While I’m excited about the talent search and engagement aspect, and the AI governance questions seemed relevant and on point, I’m less sure about the quality of the output itself.

    • Perhaps focusing too much on publishing the output is a suboptimal move, considering it might not be that high quality. On the other hand, they’re getting external feedback from conferences and peer-reviewed journals, so this might easily be an upside rather than a concern. Also, the expectation of seeing their work published is a great incentive for participants.

    • I’d be interested in takes from folks with more technical expertise about the quality of the outputs. So far, my impression is that they at the very least have pretty good evaluators, like Neel Nanda (it’d be useful to have evaluators listed on their website, if possible).

    • I didn’t donate more than 5k because that’s 10% of my pot and I’m expecting to donate to other opportunities where I’ll have more counterfactual impact. But I wanted to chip in here to potentially incentivize others to donate to Apart.

Transactions

| For | Date | Type | Amount |
| --- | --- | --- | --- |
| Translation of BlueDot Impact's AI alignment curriculum into Portuguese | about 1 year ago | project donation | 3000 |
| BAIS (ex-AIS Hub Serbia) Office Space for (Frugal) AI Safety Researchers | over 1 year ago | project donation | 1100 |
| Field building in universities for AI policy careers in the US | over 1 year ago | project donation | 25000 |
| Explainer and analysis of CNCERT/CC (国家互联网应急中心) | over 1 year ago | project donation | 1500 |
| <aa7c88dc-7311-4577-8cd3-c58a0d41fc31> | over 1 year ago | profile donation | +1500 |
| <aa7c88dc-7311-4577-8cd3-c58a0d41fc31> | over 1 year ago | profile donation | 1500 |
| Mapping neuroscience and mechanistic interpretability | over 1 year ago | project donation | 2400 |
| WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 12000 |
| Run five international hackathons on AI safety research | over 1 year ago | project donation | 5000 |
| Manifund Bank | over 1 year ago | deposit | +50000 |