Renan Araujo



Research Manager at Rethink Priorities
$33,000 total balance
$33,000 charity balance
$0 cash balance

$0 in pending offers

About Me

I’m deeply motivated to figure out the most effective ways to improve humanity’s well-being.

As part of this mission, I work on reducing existential risk as a research manager at Rethink Priorities. I also cofounded and directed Condor Camp, a project supporting talented Brazilians who want to contribute to this vision.

Previously, I worked on criminal justice reform and human rights: I did research, led a nonprofit, and worked in a criminal law court. I hold a master’s in Criminal Justice Policy from LSE and a Law degree from the Federal University of Pernambuco.

Outgoing donations



Renan Araujo

3 months ago

Thanks for the ping, @joel_bkr. This seems like an interesting opportunity! I have some questions that would help me better assess this opportunity, and plausibly help others too.

However, since answering these questions might take a lot of your time, I just want to say upfront that, based on the info above, I think I'm unlikely to fund this opportunity, and if I did fund it, I'd probably give closer to <$5k than $10k.

Here you go:

  1. I'm confused about the focus of the organization: is it on x-risk, global development, or both?

    You mention at the beginning of the post that the focus of the fellowship is on AI safety, pandemic preparedness, and great power conflict. However, the list of host-led projects only includes organizations focused on global development-related issues. For the participant-led projects, you mention that "Mountaintop will select candidates serving a lower-income community in any country outside of the United States." Maybe orgs like CSR and 1DS would be closer to the x-risk camp, but it's unclear to me what your partnership with these orgs looks like, and I don't see references to any AI safety orgs.

  2. For the x-risk-focused placements, I'd be curious to hear more about the organizations you'd partner with and examples of projects you'd like to see. (Feel free to ping me privately if you wouldn't like to share this publicly, to avoid candidates gaming the selection process.)

    This would make a difference in how impactful I think this opportunity is. To be frank, at first glance the current list of partners for the leadership institute doesn't strike me as particularly exciting, considering it's mostly focused on leadership-related skills (which I think can often be learned elsewhere) rather than neglected abilities or object-level expertise. However, I think it's impressive that you've partnered with several prestigious organizations, so I'm keen to learn more about your ability to partner with x-risk-relevant orgs.

  3. I'd love to see more concrete end and intermediate goals. Examples:

    1. How does this program reduce catastrophic risks?

      (Plausibly via great placements in relevant organizations, but my impression is that most of the orgs currently listed are not incredibly transformative NGOs or government agencies. So maybe the impactful placements would happen after the fellowship?)

    2. After people go through the leadership institute, what do you expect to change? How will you know you're on track?

      The timeline is quite long (end of 2025 is the target for this first iteration), so I'm interested in how you'll track progress throughout.

    3. After people go through the placement, how will you track success?


Renan Araujo

3 months ago

TLDR: I decided to regrant $12k to this project. I’m excited about an organized AI safety training program in an under-exposed, important region (Southeast Asia). The core team seems promising and worth the investment, despite their juniority. I think getting experienced mentors will be the main challenge (among others), but the team is aware of the relevant failure modes and taking the steps necessary to mitigate them. I’d be excited about others donating at least $23k more to this project to make their MVP possible.

Why I’m excited about this project

  • I’m keen to see new programs happen outside the main hub as a way to widen the surface area of opportunities for talented folks to engage with AI safety. Southeast Asia is one of the regions I’m most excited about due to its large population and geopolitical relevance.

  • The core team seems organized and promising. They’re quite junior, but seem worth the investment as a way of skilling up by doing. This seems relevant to me especially considering there aren’t other groups trying to fill this gap, as far as I can tell – this project can plausibly allow them to become the experienced folks that would guide others in the future.

  • To mitigate their juniority, they’re picking a research agenda with a track record of getting people interested in AI safety research and developing useful skills for AIS-relevant work. They’re also explicitly inserting their project into a broader pipeline, and establishing sensible metrics of success (e.g., participants winning Alignment Jams, getting into SERI MATS).

  • I had a call with Brian months ago and another with the whole team today. They gave me some more details about how they’re planning to skill themselves up and mitigate some of the concerns I mention here and Gaurav mentioned above. This made me even more confident about this grant.

Challenges and concerns

  • I’m concerned about their ability to provide high-quality mentorship to program participants, considering their juniority and potential limitations around getting senior mentors involved.

    • They haven’t heard back from some relevant people yet (e.g., Neel Nanda), and haven’t run similar programs in the past.

    • I’d be keen for someone with experience in this kind of program to share their expectations about how this will go.

    • However, as I mentioned above, this seems like a positive bet in expectation to me.

  • Creating talent that will end up doing capabilities research

    • Mech interp is quite dual-use and being the first program in the region to skill people up in this might end up hyping capabilities rather than AIS.

    • However, I think a) they’re sufficiently aware of this failure mode, and b) AI is sufficiently mainstream I expect this failure mode not to have a big counterfactual downside.

  • I worry they won’t find sufficiently talented people, and that investing in talent around existing hubs might be more cost-effective.

    • I think this applies to all projects aimed at field-building outside hubs. However, I think we (as a community) haven’t invested enough in experimenting with programs like these yet, so the information value by itself seems worth it, in case there is low-hanging fruit available: a lot of talented people can’t effectively go to the main hubs due to, e.g., visa limitations. I’ve become more excited about this due to my experience doing talent search via Condor Camp (Brazil) over the last 1.5 years.

  • As junior folks without a strong track record, I worry they might not be skilled enough to run an entire org/project by themselves. Maybe they won’t be able to follow through on their plans.

    • I think they individually have enough experience in their fields to make me confident about betting on them. In particular, in my interactions with Brian, he’s seemed quite organized and competent, and I’ve appreciated his work at CEA and setting up EA Philippines. I know less about the other two team members, but at first glance they seem to have complementary skill sets and experience.


Renan Araujo

3 months ago

@briantan @GauravYadav just wanted to say this discussion was useful for me, thanks for bringing up those points and for the responses!


Renan Araujo

3 months ago

@kiko7 thanks! Ultimately, I decided not to evaluate this since I don't feel confident that I have the right background for it. I encourage others with a more technical background to evaluate this grant.


Renan Araujo

4 months ago

I'd be curious about:

  1. Some examples of successful projects that counterfactually came out of CEEALAR.

  2. Your expected success rate based on the last 5 years (or maybe the last year, since this might be more representative of future success rate). By success rate I mean something like "impactful projects counterfactually coming out of CEEALAR that are above your bar of net value worth 1.4k".

  3. Any examples of net negative projects or value that came out of CEEALAR? E.g., maybe someone being based in Blackpool who could instead have spent that time somewhere else, or some community health problem stemming from many EAs living and working together.


Renan Araujo

4 months ago

Interesting project! I'm curious about a couple of things:

  1. What would the research agenda most likely look like? (E.g., what you think would be the most exciting version and the realistic version.)

  2. How many people do you expect would work on that agenda, and what would their backgrounds be? (E.g., would they already have an alignment-related background, or just be technical folks interested in the field, PhD students or faculty, etc.?)


Renan Araujo

4 months ago

  • TLDR: I’m contributing to this project based on a) my experience running one of their hackathons in April 2023, which I thought was high quality, and b) my excitement to see this model scaled, as I think it makes an outsized contribution to the talent search pipeline for AI safety. I’d be interested in seeing someone with a more technical background evaluate the quality of the outputs, but I don’t think that’s the core of their impact here.

  • My experience with Apart

    • I ran an AI governance hackathon as part of my work at Condor Camp, an organization that does AI safety talent search focused on Brazilian university students.

    • We were experimenting with models different from the camp that could potentially be more cost-effective. We considered doing a hackathon, but the overhead of coming up with good questions, finding speakers/mentors, and overall putting together the intellectual infrastructure would have been too much for us. Apart Research solved that bottleneck.

    • We ran a hackathon (technically an ideathon, since it was on governance) with ~30 participants. This helped some Condor Camp alumni stay engaged with AI safety beyond the camp, and ultimately “reactivated” one of them who wasn’t working on AI governance. That participant led a team, received an honorary prize, and ended up using the hackathon project to get into a CHERI fellowship.

  • Why I’m excited about scaling this model

    • I think Apart’s approach solves a bottleneck for community builders/talent searchers, making a somewhat robust model (hackathon-type competitions) much more easily accessible.

    • It also reduces the cost of intellectual work for everyone: mentors and evaluators work on the best projects from all over the world, instead of having to repeat the work for every single iteration.

    • As a result, I expect them to help the community broaden our surface area considerably, finding talent more cost-effectively.

  • Challenges and concerns

    • While I’m excited about the talent search and engagement aspect, and the AI governance questions seemed relevant and on point, I’m less sure about the quality of the output itself.

    • Perhaps focusing too much on publishing the output is a suboptimal move, considering it might not be that high quality. On the other hand, they’re getting external feedback from conferences and peer reviewed journals, so this might easily be an upside rather than a concern. Also, it’s a great incentive for participants to have the expectation of seeing their work published.

    • I’d be interested in takes from folks with more technical expertise about the quality of the outputs. So far, my impression is that they at the very least have pretty good evaluators like Neel Nanda. (It’d be useful to have evaluators listed on their website, if possible.)

    • I didn’t donate more than $5k because that’s 10% of my pot and I’m expecting to donate to other opportunities where I’ll have more counterfactual impact. But I wanted to chip in here to potentially incentivize others to donate to Apart.