
10th edition of AI Safety Camp

Active Grant
$65,483 raised
$313,000 funding goal

Project summary

AI Safety Camp is a program with a 6-year track record of enabling people to find careers in AI Safety.
We support up-and-coming researchers outside the Bay Area and London hubs.

We are out of funding. To make the 10th edition happen, fund our stipends and salaries.

What are this project's goals and how will you achieve them?

AI Safety Camp is a program for inquiring into how to ensure future AI is safe, and for trying concrete work on that in a team.

For the 9th edition of AI Safety Camp we opened applications for 29 projects.

We are the first to host a special area supporting “Pause AI” work. With funding, we can scale from 4 projects on restricting corporate AI development to 15 projects next edition.

We are excited about our new research lead format, since it combines:

  1. Hands-on guidance: We guide research leads (RLs) to carefully consider and scope their project. Research leads in turn onboard teammates and guide their teammates through the process of doing new research. 

  2. Streamlined applications: Team applications were the most time-intensive portion of running AI Safety Camp. Reviewers were often unsure how to evaluate an applicant’s fit for a project that required specific skills and understandings. RLs usually have a clear sense of who they would want to work with for three months. So we instead guide RLs to prepare project-specific questions and interview their potential teammates.

  3. Resource-efficiency: We are not competing with other programs for scarce mentor time. Instead, we prospect for thoughtful research leads who at some point could become well-recognized researchers. The virtual format also cuts overhead – instead of sinking funds into venues and plane tickets, the money goes directly to funding people to focus on their work in AI safety. 

  4. Flexible hours: Participants can work remotely from their timezone alongside their degree or day job – to test their fit for an AI Safety career. 

How will this funding be used?

We are fundraising to pay for:

  • Salaries for the organisers of the current AISC 

  • Funding future camps (see budget section)


Whether we run the tenth edition or put AISC on hold indefinitely depends on your donation.

Last June, we had to freeze a year's worth of salary for three staff. Our ops coordinator had to leave, and Linda and Remmelt decided to run one more edition as volunteers.

AISC previously received grants paid with FTX money. After the FTX collapse, we froze $255K in funds to cover clawback claims. For the current AISC, we have $99K left from SFF that was earmarked for stipends – but nothing for salaries, and nothing for future AISCs.

If we have enough money we might also restart the in-person version of AISC. This decision will also depend on an ongoing external evaluation of AISC, which among other things, is evaluating the difference in impact of the virtual vs in-person AISCs.

By default we’ll decide what to prioritise with the funding we get. But if you want to have a say, we can discuss that. We can earmark your money for whatever you want.


Potential budgets for various versions of AISC

These are example budgets for different possible versions of the virtual AISC. If our funding lands somewhere in between, we’ll do something in between.

Virtual AISC - Budget version 

  • Software etc: $2K
  • Organiser salaries, 2 ppl, 4 months: $56K
  • Stipends for participants: $0

Total: $58K

In the Budget version, the organisers do the minimum needed to get the program started, with no continuous support for AISC teams during their projects and no time for evaluating and improving future versions of the program.

Salaries are calculated based on $7K per person per month.

Virtual AISC - Normal version 

  • Software etc: $2K
  • Organiser salaries, 3 ppl, 6 months: $126K
  • Stipends for participants: $185K

Total: $313K

For the non-budget version, we have one more staff member and more paid hours per person, which means we can provide more support all round. 

Stipends are estimated as: $185K = $1.5K/research lead × 40 + $1K/team member × 125.
The numbers of research leads (40) and team members (125) are guesses based on how much we expect AISC to grow.
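As a sanity check, the budget arithmetic above can be reproduced in a few lines (all figures are taken directly from the budget sections; the variable names are just for illustration):

```python
# Budget arithmetic for the two virtual AISC versions described above.
MONTHLY_SALARY = 7_000  # $7K per person per month

# Budget version: 2 organisers, 4 months, no stipends.
budget_salaries = 2 * 4 * MONTHLY_SALARY          # 56_000
budget_total = 2_000 + budget_salaries + 0        # software + salaries + stipends
print(budget_salaries, budget_total)              # 56000 58000

# Normal version: 3 organisers, 6 months, plus stipends.
normal_salaries = 3 * 6 * MONTHLY_SALARY          # 126_000
stipends = 1_500 * 40 + 1_000 * 125               # research leads + team members
normal_total = 2_000 + normal_salaries + stipends
print(stipends, normal_total)                     # 185000 313000
```

Both totals match the $58K and $313K figures, and the $313K matches the funding goal.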

Who is on your team and what's your track record on similar projects?

We have run AI Safety Camp over five years, covering 8 editions, 74 teams, and 251 participants.

We iterated a lot, based on participant feedback. We converged on a research lead format we are excited about. We will carefully scale this format with your support.

As researchers ourselves, we can meet potential research leads where they are at. We can provide useful guidance and feedback in almost every area of AI Safety research.

We are particularly well-positioned to support epistemically diverse bets.


Organisers

Remmelt – coordinator of "Stop/Pause AI"

  • Remmelt collaborates with an ex-Pentagon engineer and Prof. Roman Yampolskiy on fundamental controllability limits. Both researchers are funded by the Survival and Flourishing Fund.

  • Remmelt works with diverse organisers to restrict harmful AI scaling, including: 
    Pause AI, creative professionals, anti-tech-solutionists, product safety experts, and climate change researchers.

  • At AISC, Remmelt wrote a comprehensive outline of the control problem, presented here.

  • Remmelt previously co-founded EA Netherlands and ran national conferences.


Linda - coordinator of "everything else"

  • After completing her physics PhD, Linda interned at MIRI and later joined the Refine fellowship.

  • Linda has a comprehensive understanding of the technical AI Safety landscape. An autodidact, she studies agent foundations theory, cognitive neuroscience, and mechanistic interpretability.

    • Several researchers (eg. at MIRI) noted that Linda picks up on new theoretical arguments surprisingly fast, even where the inferential distance is long.

  • At AISC, Linda co-published RL in Newcomblike Environments, selected for a NeurIPS spotlight presentation.

  • Linda initiated and spearheaded AI Safety Camp, AI Safety Support, and Virtual AI Safety Unconference.


Track record

AI Safety Camp is primarily a learning-by-doing training program.
People get to try a role and explore directions in AI safety, by collaborating on a concrete project.

Multiple alumni have told us that AI Safety Camp was how they got started in AI Safety.
AISC topped the ‘average usefulness’ list in Daniel Filan’s survey.

Papers that came out of the camp include:

Projects started at AI Safety Camp went on to receive a total of $613K in grants:

  • AISC 1: Bounded Rationality team – $30K from Paul
  • AISC 3: Modelling Cooperation – $24K from CLT, $50K from SFF, $83K from SFF, $83K from SFF
  • AISC 4: Survey – $5K from LTFF
  • AISC 5: Pessimistic Agents – $3K from LTFF
  • AISC 5: Multi-Objective Alignment – $20K from EV
  • AISC 6: LMs as Tools for Alignment – $10K from LTFF
  • AISC 6: Modularity – $125K from LTFF
  • AISC 7: AGI Inherent Non-Safety – $170K from SFF
  • AISC 8: Policy Proposals for High-Risk AI – $10K from NL
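The $613K total can be verified by summing the individual grant amounts listed above (a quick sketch; amounts in $K, copied from the list):

```python
# Grants (in $K) received by projects started at AI Safety Camp,
# in the order listed above.
grants = [30, 24, 50, 83, 83, 5, 3, 20, 10, 125, 170, 10]
print(sum(grants))  # 613, i.e. $613K in total
```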

Organizations launched out of camp conversations include:

  • Arb Research, AI Safety Support, and AI Standards Lab.

Alumni went on to take positions at:

  • FHI (1 job+4 scholars+2 interns), GovAI (2 jobs), Cooperative AI (1 job), Center on Long-Term Risk (1 job), Future Society (1 job), FLI (1 job), MIRI (1 intern), CHAI (2 interns), DeepMind (1 job+2 interns), OpenAI (1 job), Anthropic (1 contract), Redwood (2 jobs), Conjecture (3 jobs), EleutherAI (1 job), Apart (1 job), Aligned AI (1 job), Leap Labs (1 founder, 1 job), Apollo (2 founders, 4 jobs), Arb (2 founders), AISS (2 founders), AISL (2+ founders), ACS (2 founders), ERO (1 founder), BlueDot (1 founder)

    These are just the positions we know about. Many more are engaged in AI Safety in other ways, e.g. as PhD students or independent researchers.

    Update: Both of us now consider positions at OpenAI net negative and we are seriously concerned about positions at other AGI labs.

For statistics of previous editions, see here. We also recently commissioned Arb Research to run alumni surveys and interviews to carefully evaluate AI Safety Camp's impact.

What are the most likely causes and outcomes if this project fails? (premortem)

  • Not receiving minimum funding.

    • There are now fewer funders.

    • The evaluator who assessed us last round at both SFF and LTFF was too busy.
      His guess, he replied, was that he was not currently super interested in most of the projects we found RLs for, nor in the "do not build uncontrollable AI" area.

    • We look for epistemically diverse bets. We are known for being honest in our critiques when we think individuals or areas of work are mistakenly overlooked. However, we spent little time networking and clarifying our views to funders, which unfortunately led to the current situation.

  • Receiving funding, but not enough to cover an ops staff member.

    • Linda and Remmelt are researchers themselves, and a little worn out from running operations. Funding for a third staff member would make the program more sustainable.

  • Not being selective enough about projects.

    • We want to focus more time on inquiring with potential research leads about their cruxes and evaluating their plans. This round, we were volunteering, so we had to satisfice. We rejected ⅓ of proposals for "do not build uncontrollable AI" and ⅕ of proposals for "everything else".

  • Receiving fewer applicants overall because of competition with new programs.

    • Team applications per year have been steady, though (229 for '22; 219 for '23; 222 for '24).

  • Lacking the pipeline to carefully scale up "do not build uncontrollable AI" work.

    • Given Remmelt's connections, we are the best-positioned program to do this.

What other funding are you or your project getting?

No other funding sources.


Remmelt Ellen

4 months ago

Progress update

Many thanks to the proactive donors who supported this fundraiser! It got us out of a pickle, giving us the mental and financial space to start preparing for edition 10.

Last week, we found funds to cover backpay plus edition 10 salaries. There is money left to cover some stipends for participants from low-income countries, and a trial organiser to help evaluate and refine the increasing number of project proposals we receive.

That said, donations are needed to cover the rest of the participant stipends, and runway for edition 11. If you could continue to reliably support AI Safety Camp, we can reliably run editions, and our participants can rely on having covered some living costs while they do research.

P.S. Check out summaries of edition 9 projects here. You can also find the public recordings of presentations here.



Robinson

8 months ago

Hi Remmelt. I want to link a comment to one of your previous projects. It was useful for me at the time, and may be useful for others who feel an urge to fund this one, because I think some of the points are still valid. I recommend reading the entire discussion and following the links pointing to his tweets https://manifund.org//projects/facilitate-lawsuits-to-restrict-corporate-ai-scaling?tab=comments#138a167d-2742-7b83-96f5-6fdced9b4319


Remmelt Ellen

8 months ago

@Holly, thanks for sharing. Always happy to discuss these things.

donated $100

Changbai Li

8 months ago

AISC was my first opportunity to work with and know folks in this effort. 2 years in, and I'm realizing how small this field still is and how important it is to keep pipelines like AISC available.


Remmelt Ellen

8 months ago

Thank you for the donation, and also sharing your experience!

donated $5,000

Ravi Parikh

10 months ago

Efforts to get more people into AI Safety seem high leverage relative to most other things


Austin Chen

10 months ago

Approving this project as in line with our work on AI Safety! I think this is a pretty compelling writeup, and a few people who I trust are vouching for the organizers.

Notably, Remmelt and Linda made an excellent fundraising appeal on EA Forum -- they were very successful at turning past successes into a concrete appeal for funding, drawing in donations from many members of the EA community, rather than a single large donation from established funders. I'm very happy that Manifund can help with this kind of diversified fundraising. (I also appreciate that Linda has written up recommendations for other projects she finds compelling, including some on our site!)

donated $15,000

R

10 months ago

Seems important and underfunded!

donated $15,010

Alexander Mont

10 months ago

Just adding a little bit more to get it over the line

donated $15,010

Alexander Mont

10 months ago

I'm excited to see more people going into AI Safety! This seems like a great way to keep the pipeline going with a relatively small amount of money.


Remmelt Ellen

10 months ago

@Alex319, thank you for the thoughtful consideration, and for making the next AI Safety Camp happen!

As always, if you have any specific questions or things you want to follow up on, please let us know.

donated $3,000

Adam Yedidia

10 months ago

Looks promising! I hope you get funded.


Remmelt Ellen

10 months ago

@adamyedidia, thank you for the donation. We are getting there in terms of funding, thanks to people like you.

We can now run the next edition.

donated $100

Chin Ze Shen

11 months ago

I participated in the last AISC and it was great.


Remmelt Ellen

11 months ago

@zeshen, thank you for the contribution!

I also saw your comment on the AI Standards Lab being launched out of AISC.
I wasn't sure about the counterfactual, so that is good to know.

donated $2,000

Isaak Freeman

11 months ago

Without any in-depth evaluation, I think this is worth a signal-boost and a seed grant. I’d be disappointed to see AISC stop!


Remmelt Ellen

11 months ago

@IsaakFreeman, thank you – I appreciate you daring to be a first mover here.