
AI Safety Research Organization Incubator - Pilot Program

Active Grant
$15,977 raised
$635,000 funding goal

Project summary

Catalyze aims to run a 3-month program to incubate new AI Safety research organizations. In our pilot cohort, we want to focus on incubating technical research organizations and host up to 20 participants in London (ideally at the LISA offices) for 2 of those months in Q1/Q2 2024. 

The program would bring together strong AI Safety researchers and engineers and talented entrepreneurs/generalists with strong track records. It would help them:

  1. Find a complementary cofounder.

  2. Gather additional knowledge and skills they need through an educational component.

  3. Get access to our support network of expert mentors and advisors, a seed funding network, and many networking opportunities.


Why this program?

After prioritizing among dozens of AI safety field-building interventions, we found this intervention particularly promising. We believe it has the potential to address an important bottleneck to increasing the much-needed output of high-quality AI safety research: there seem to be too few open positions in AI safety research. This leads to the current situation where many talented researchers and engineers with an AI safety focus resort to either independent research or non-AI-safety jobs. Furthermore, senior talent from other fields is not drawn into AI safety as much as it could be with good job openings (independent research is not a good option if you have a mortgage and a family). Existing organizations do not seem to be scaling up quickly enough to address these problems, and we shouldn't expect that to change in the near future; we gather that this is often because scaling is hard and not always the best move for a given organization.

We are in conversation with a number of promising individuals who either would like to set up an organization or already have, and we have asked them about their hurdles and which support structures they would want to see. With this program, we aim to address these issues and help create more AI safety organizations.

This is a very condensed version of our reasoning; feel free to reach out via alexandra@catalyze-impact.org if you'd like to discuss things further.

What are this project's goals and how will you achieve them?

Program overview

When: Starting Q1 2024, a 3-month program.

Where: 1 month online, 2 months in person, ideally at the London Initiative for Safe AI (LISA).

Who: up to 20 outstanding technical and/or entrepreneurial individuals who deeply care about AI safety and have a strong track record (e.g. founding/scaling a previous successful organization, having published well-regarded research).

What:

  • Month 1: online lectures & workshops, including sessions from expert guest speakers (e.g. leaders of research organizations). The goals are:

    • Teach crucial skills in building AI safety research organizations.

    • Form a basis for assignments that help participants 1) further develop their research organization proposals and 2) test cofounder fit by working together with other participants.

  • Months 2 & 3: in person in London. Participants finish forming cofounder teams and start building their organizations while receiving support and opportunities:

    • Continued workshops & testing of cofounder fit.

    • Ongoing support from us and referrals to our network of mentors and advisors (e.g. business advice, scientific advice, legal advice).

    • Networking opportunities within the AI safety ecosystem.

    • Fundraising opportunities: towards the end of the program, participants get to present their proposals to our network of seed funders.

Theory of Change - summary graphic

Please take a look at our Theory of Change graphic for this program for a more visual overview of our goals for this project and the actions we plan to take to get there.


Theory of Change - description

Our goals: ultimately, we aim to reduce the risk of severely negative outcomes from advanced AI. We hope to do this by increasing the number of new AI safety research organizations with strong founding teams, a promising approach to AI safety, and the right resources. Such organizations create opportunities for large-scale, well-coordinated, and well-funded collaborative research, as well as greater diversity in approaches to the AI alignment problem.


To achieve this, the outputs we strive for are:

  • A cohort of strong participants, some of whom bring promising research directions to build an organization around.

  • Participants test their cofounding fit through various collaboration projects.

  • Participants have the necessary skills and knowledge to start an AI safety research organization.

  • Participants have a strong AI safety network.

    • They have access to a network of advisors and mentors they can easily interact with.

    • They have access to a seed funding network.

    • They are integrated into the London AI safety ecosystem.

Concrete inputs that we hope will lead to these outcomes are: 

  • Outreach to encourage suitable potential participants to apply to the program.

  • Thorough vetting to identify the most promising applicants and the strength of the research directions they propose.

    • Selection criteria for participants:

      • For all participants: ambitiously altruistic, value-aligned, easy to work with.

      • Technical/researcher participants: a good academic or engineering track record, a promising proposal for the type of research organization to start (as determined by an external selection board), aptitude for research management, and a strong understanding of AI safety.

      • Entrepreneurial generalist participants: project management experience (~ 2+ years), strong problem-solving abilities, ideally a good entrepreneurship track record (e.g. started a successful company).

    • Selection of proposed research directions:

      • We will make a pre-selection of promising applications which we show to a panel of experts with diverse opinions. 

      • We plan to judge proposals on the following dimensions:

        • Clear path to impact.

        • Scalability.

        • How promising our panel judges the proposal to be.

  • We offer participants a training program which helps prepare them for founding their AI safety research organization. This includes assignments which help to test working together.

    • E.g. workshops from experts on building a theory of change, best practices of research management, fundraising for AI Safety research 101, setting up a hiring pipeline to find great researchers, etc.

  • Outreach to potential advisors and mentors for the participants.

  • Outreach to potential funders to join a seed funding network which the participants can meet with towards the end of the program.

  • We bring participants into the London AI safety ecosystem throughout the program.

    • This ecosystem includes Conjecture, Apollo, LISA, SERI MATS, ARENA, BlueDot Impact, and the UK Task Force.

Indicators of success

  • The pilot program attracts at least 8 of our target researchers and 8 of our target entrepreneurs as participants.

  • At least 30% of incubated organizations will each have raised >$1M in funding within a year.

  • Incubated organizations publish/conduct good AI safety research within the first two years, e.g. at least 4 research papers presented at a top-tier ML conference such as ICML, ICLR, or NeurIPS.

  • >80% of program graduates would recommend that others join the program (i.e. an average Net Promoter Score of >8).

  • At least 3 in 5 participants find a cofounder through the program whom they still work with 4 months after the program ends.

  • Through the program, participants meet on average at least 20 new contacts in the AI safety space they’d be comfortable asking for a favor.

If you would like more details on how we plan to approach certain things, or if you have input, feel free to reach out via alexandra@catalyze-impact.org.

How will this funding be used?

High-level breakdown

$149K: Participant accommodation (2 months in London)

$146K: 3-month stipends for 20 participants

$139K: Catalyze team salaries (8 months of runway, 4 FTE)

$41K: Office and venue space for the program (2 months)

$24K: Participant travel costs

$20K: Catalyze team travel & admin costs (8 months)

$18K: Daily lunch during the in-person part of the program (2 months)

$15K: Guest lecturer costs

$83K: Buffer of 15%

Total: $635K
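
As a quick arithmetic check on the breakdown above (all figures in $K, rounded, and assuming the 15% buffer is taken on the pre-buffer subtotal): the line items sum to $552K, the buffer is roughly $83K, and together they give the $635K total:

$$149 + 146 + 139 + 41 + 24 + 20 + 18 + 15 = 552$$
$$0.15 \times 552 \approx 83$$
$$552 + 83 = 635$$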

Please look here for a more detailed breakdown

What difference will different levels of funding make? 

Receiving the first $15K-$60K in funding would provide us with crucial runway, enabling us to continue to fully dedicate our attention to this project. This increases the chance of us being able to fundraise enough to run a proper version of this program. It would also create space for us to execute on more Minimum Viable Products (MVPs) that can help us build towards the more comprehensive pilot.

In the spirit of cost-effectiveness, we have considered which trade-offs we expect to be fine to make vs. which ways of cutting costs would seriously reduce the expected impact of the program. The version of the program tied to the budget above reflects these judgments. If we receive less funding in total than our fundraising goal, but at least $260K, we would try to run a version of the program that might differ in one or several of the following ways:

  • Running more of the program online rather than in-person.

    • Though we believe having more of the program in person would aid in assessing cofounder fit, allow more efficient collaboration while building the organizations, and help tremendously with building participants' AI safety networks through exposure to the ecosystem.

  • Reducing the number of participants (e.g. 16 instead of 20).

    • Though reducing the number of participants would likely decrease the percentage of participants who find a cofounder because there would be less choice. We think this could have a strong negative effect on the number of successful organizations coming out of the program.

  • Running the program for 2 months instead of 3.

    • Though we think this would give participants a very tight runway to prepare for their first fundraising round, making joining the program a riskier move. This might deter participants who need more financial certainty (such as more senior people with a mortgage and a family).

  • We could ask participants to arrange and pay for their own accommodation.

    • Though, given that the stipends we have in mind are already quite low ($2,400 per month), we think that asking participants to pay for their own accommodation in London could make a portion of them unable to join the program. Additionally, finding affordable accommodation in London can be challenging and time-consuming, again possibly stopping some of our preferred participants from joining.

Who is on your team and what's your track record on similar projects?

Our core team currently consists of three members. All three have an entrepreneurial background and are highly value-driven and impact-focused.

  • Kay Kozaronek has past experience in business consulting and technical AI Safety research, having participated in SERI MATS and cofounded the AI Safety research group Cadenza Labs. He is currently also a part of the Incubation Track of the Future Academy.

  • Gábor Szórád brings 15 years of operational and entrepreneurial experience to the organization, having led teams ranging from 5 to 8,200 people in leadership roles across Europe, Asia, and the Middle East. Since the beginning of 2023, he has focused full-time on upskilling and volunteering in AI Safety.

  • Alexandra Bos initiated Catalyze and has done AI Safety field-building prioritization research over the past months to figure out which interventions could have an outsized impact in the field. She has set up and led a non-profit in the past (TEDxLeidenUniversity) and was among the top ~3% in Charity Entrepreneurship's selection process last year (from a pool of >1,000).

  • We intend to include more people on the executive team in preparation for the program. 

In addition to the executive team, there are board members of the Catalyze Impact Foundation who co-decide on important matters:

  • Stan van Wingerden is the COO/CFO of Timaeus, a recently founded AI safety research organization dedicated to making breakthrough fundamental progress on technical AI Alignment. Their research focuses on Developmental Interpretability and Singular Learning Theory. Previously, Stan was the CTO of an algorithmic trading fund and an ML researcher, and he studied Theoretical Physics & Philosophy.

  • Magdalena Wache is an independent technical AI Safety researcher, previously at MATS. She has a Master's degree in Machine Learning, has organized the AI Safety Europe Retreat, and was involved in founding the European Network for AI safety.

In addition, we are putting together an advisory board. The current members of our advisory board are:

  • Ryan Kidd is the co-director of MATS, a board member and cofounder of the London Initiative for Safe AI (LISA), and a Manifund regrantor. Previously he completed a PhD in Physics at the University of Queensland (UQ), where he ran UQ’s Effective Altruism student group for three years.

  • Jan-Willem van Putten is the cofounder and EU AI Policy Lead at Training For Good. He is an impact-focused serial entrepreneur, having founded and led several other enterprises as well. He leverages his management consulting and for-profit experience to create positive societal change through innovative training programs. 

What are the most likely causes and outcomes if this project fails? (premortem)

[ Medium - high likelihood ]

  • A significant portion of the incubated organizations fail to have a big impact, for example because:

    • Cofounder fit is not good enough.

      -> To reduce this risk, one of our selection criteria will be for participants to be 'easy to work with'. Furthermore, we will pay close attention in our selection process to putting together a group of complementary participants. For better odds of finding a match, we will also try our best to fundraise for a larger cohort rather than a very small one.

    • Their research agendas/plans for the organizations do not turn out to be as useful as expected.

    • Lack of (AI safety) ambition in the cofounders.

      • -> To reduce this risk, one of our selection criteria will be for participants to be value-aligned, i.e. care strongly about reducing the worst risks from AI.

    • Incubated organizations do not find enough funding, either immediately after the program, or a few years after being founded.

      • -> To reduce this risk, we are putting together a seed funding circle and are already in touch with several potential members. We are also considering assisting incubatees with responsibly tapping into VC funding.

[ Medium likelihood ]

  • We are not able to fundraise enough for an ambitious version of this program, which could lead to us:

    • 1) Not being able to attract the high-caliber, more senior talent that we expect would create the best organizations through this program.

    • 2) Not being able to create an environment with the highest odds of success for participants (e.g. running a larger part of the program online rather than in-person).

  • The money spent on this program has less impact than it would have had if spent on other AI Safety field-building projects or existing AI safety organizations (the counterfactual).

  • With the program, we will have taken time from a number of talented and important individuals, time that would have been better spent elsewhere.

  • We attract fewer than 7 people who meet our standards for suitable technical participants with a promising proposal for research directions.

    • -> To reduce this risk, we are already on the look-out for and in conversation with promising prospective participants.

[ Low likelihood ]

  • We attract fewer than 7 suitable participants with an entrepreneurial background to join the program.

    • -> To reduce this risk, we are already on the look-out for and in conversation with promising prospective participants.

  • We do not manage to select the most promising research organization proposals to incubate.

    • -> To reduce this risk, we’re working on setting up an external board of trusted experts to help us evaluate these applications.

  • We inadvertently incubate organizations that end up fast-tracking capabilities research over safety research.

    • -> To reduce this risk, we will:

      • Select for a strong impact focus & value-alignment in participants.

      • Assist the founders in setting up their organizations in a way that limits the potential for value drift to take over (e.g. a charter for the forming organization that would legally make this more difficult, help with vetting whom they take on as investors or board members, or suggestions of candidates we know have a strong AI safety focus).

What other funding are you or your project getting?

  • Lightspeed awarded Catalyze $30K in seed funding in August of this year.

Fundraising plans

  • We are in the process of applying to the LTFF, SFF, and Open Philanthropy.

  • We are planning to lower accommodation and office costs through potential partnerships and sponsorships.

  • We are working on acquiring funding from a number of high-net-worth individuals.

  • We are planning to approach the UK’s Foundation Model Taskforce / AI Safety Institute for funding as well.

Whenever possible, our goal is to raise money that counterfactually would not have been spent on x-risk-focused AI safety and/or EA-aligned projects.


Alexandra Bos

9 months ago

Progress update

What progress have you made since your last update?

  • Fundraising: We have raised ~$130K, which enables us to run an adjusted version of the incubation program we originally proposed.

  • Supporting new AI Safety organizations: We carried out a number of support interventions for young AI safety research organizations. This includes a 1-week product design sprint with a new evals organization and ongoing consultancy sessions with various new AI Safety research org founders. In these sessions we help clients tackle the main challenges they face and connect them to potential co-founders, funders, or others. We also organized a number of networking events to support these founders (incl. an event with many of the main evals organizations around EAG Bay Area 2024).

  • Finding promising founders: We have looked for and found very promising founders to support & received ~130 expressions of interest for our upcoming program. Part of our strategy has been experimenting with AI Safety Entrepreneurship community-building, such as hosting a dinner on this theme around EAG London 2024 (~25 attendees) and hosting a Q&A event with three AI safety org founders (~40 attendees).

  • Preparing to launch upcoming programs: We have prepared for launching the upcoming incubation programs (incl. setting up an applicant selection pipeline, headhunting, gathering advisors for incubatees, designing program content, putting together resources) and plan to launch applications this June as soon as we wrap up our current hiring round. The program itself will likely start around August.

If either of our funders would like to hear more details on these activities, people or clients in a de-anonymized manner, we're happy to share this with them directly.

What are your next steps?

  • Wrapping up our hiring round & launching our upcoming programs.

Is there anything others could help you with?

  • Seed funding circle members: we'd be interested in getting in touch with additional people interested in providing seed funding for the new AI Safety research organizations (either for-profit or non-profit). Please let us know if you are interested in this or know someone who might be.

donated $15,000

Ryan Kidd

about 1 year ago

Main points in favor of this grant

  1. I think that there should be more AI safety organizations to: harness the talent produced by AI safety field-building programs (MATS, ARENA, etc.); build an ecosystem of evals and auditing orgs; capture free energy for gov-funded and for-profit AI safety organizations with competent, aligned talent; and support a multitude of neglected research bets to aid potential paradigm shifts for AI safety. As an AI safety organization incubator, Catalyze seems like the most obvious solution.

  2. As Co-Director at MATS, I have seen a lot of interest from scholars and alumni in founding AI safety organizations. However, most scholars do not have any entrepreneurial experience and have little access to suitable co-founders in their networks. I am excited about Catalyze's proposed co-founder pairing program and start-up founder curriculum.

  3. I know Kay Kozaronek fairly well from his time in the MATS Program. I think that he has a good mix of engagement with AI safety technical research priorities, an entrepreneurial personality, and some experience in co-founding an AI safety startup (Cadenza Labs). I do not know Alexandra or Gábor quite as well, but they seem driven and bring diverse experience.

  4. I think that the marginal value of my grant to Catalyze is very high at the moment. Catalyze are currently putting together funding proposals for their first incubator program and I suspect that their previous Lightspeed funding might run low before they receive confirmation from other funders.

Donor's main reservations

  1. Alexandra and Kay do not have significant experience in founding/growing organizations, and none of the core team seem to have significant experience with AI safety grantmaking or cause prioritization. However, I believe that Gábor brings significant entrepreneurial experience, and Jan-Willem and I, as advisory board members, bring significant additional experience in applicant selection. I don't see anyone else lining up to produce an AI safety org incubator, and I think Alexandra, Kay, and Gábor have a decent chance at succeeding. Regardless, I recommend that Catalyze recruit another advisory board member with significant AI safety grantmaking experience to aid in applicant/project selection.

  2. It's possible that Catalyze's incubator program helps further projects that contribute disproportionately to AI capabilities advances. I recommend that Catalyze consider the value alignment of participants and the capabilities-alignment tradeoff of projects during selection and incubation. Additionally, it would be ideal if Catalyze sought an additional advisory board member with significant experience in evaluating dual-use AI safety research.

  3. There might not be enough high-level AI safety research talent available to produce many viable AI safety research organizations right away. I recommend that Catalyze run an MVP incubator program to assess the quality of founders/projects, including funder and VC interest, before investing in a large program.

Process for deciding amount

Alexandra said that $5k gives Catalyze one month of runway, so $15k gives them three months runway. I think that three months is more than sufficient time for Catalyze to receive funding from a larger donor and plan an MVP incubator program. I don't want Catalyze to fail because of short-term financial instability.

Conflicts of interest

  1. I am an unpaid advisor to Catalyze. I will not accept any money for this role.

  2. Kay was a scholar in MATS, the program I co-lead. Additionally, I expect that many potential participants in Catalyze's incubator programs will be MATS alumni. Part of MATS' theory of change is to aid the creation of further AI safety organizations and funders may assess MATS' impact on the basis of alumni achievements.

  3. Catalyze wants to hold their incubator program at LISA, an office that I co-founded and at which I remain a Board Member. However, I currently receive no income from LISA and, since it is a not-for-profit entity, I have no direct financial stake in LISA's success. That said, I obviously want LISA to succeed and believe that a potential collaboration with Catalyze might be beneficial.

My donation represents my personal views and in no way constitutes an endorsement by MATS or LISA.

donated $15,000

Ryan Kidd

about 1 year ago

How useful is $5-10k? What impact does this buy on the margin currently?


Alexandra Bos

about 1 year ago

@RyanKidd thanks for asking!

Receiving the first $5K-$10K in funding would help us gain very useful runway and resources to put towards better MVPs. We're pretty cheap right now; around $5K gives us 1 extra month of runway.

The resources would enable us to:

1) Continue to fully dedicate our attention to making this program a reality,

2) Create space for us to execute on more MVPs that are helping us learn how to best shape the program, build a proof of concept, and build our track record in this niche.

In other words, this funding would increase the odds of us being able to run a more comprehensive pilot later in 2024.

The MVPs we are planning and partially already running consist of 1) supporting and coaching very early AI safety research organizations and 2) enabling people to find a cofounder for the AI safety research organization they want to set up.


Alexandra Bos

about 1 year ago

To elaborate on this a bit and make it more precise:
The first 15-20K would go mostly towards runway and very cheap MVPs which don't require resources from us apart from our time (i.e. things we can run online).

Funding above the $15-20K and up to around $60K would go towards somewhat less cheap MVPs, such as hosting an in-person mini version of the program for a few individuals who are in the early phases of starting their AI safety research organization.

donated $977

Vincent Weisser

over 1 year ago

Very excited about this effort, think it could have great impact, and personally know Kay and think he has a good chance to deliver on this with his team!