Funding requirements

Reach min funding
Get Manifund approval
Proposal grant

Closes December 1st, 2024
$100 raised
$5,000 minimum funding
$452,000 funding goal

Offer to donate

10 days left to contribute

You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.


Project summary

Apart Research is a non-profit research and community-building lab that aims to reduce AI risk through technical research and talent acceleration. We engage thousands of people around the globe in AI safety research through our Apart Sprints research hackathons and our Apart Lab research fellowship.

  • Apart Hackathons: 3-day research hackathons on impactful AI safety topics with senior mentors, speakers, and judges designed to test participants' talent fit, pilot original research, and connect with like-minded collaborators.

  • Apart Lab: A program to guide aspiring researchers toward peer-reviewed AI safety research, providing a network of talented researchers, resources and training, compute, research mentorship, and project management. Our fellows have published their research in major venues, such as NeurIPS, ACL, and ICLR.

For example: Our Code Red Hackathon, co-hosted with METR in March this year, helped 168 participants engage directly with a novel research agenda, accelerated METR's work (via 230 new evaluation ideas, 108 specifications, and 28 implementations), and led to ten new fellows in Apart Lab. They all contributed challenge implementations for METR's task standard during the hackathon, and are now (6 months later) finishing up five original research projects in Apart Lab, with three still in progress and two already accepted at the NeurIPS workshops System 2 Reasoning At Scale ("CryptoFormalEval") and EvalEval 2024 ("Rethinking CyberSecEval"). One of the team members describes the impact as follows: "Apart's massively accelerated my progress, offering me the support to carry out meaningful research". Of the ten, six are now transitioning from careers in tech and cybersecurity to AI safety. Four are still studying.

Your funding will support our growth of the above approach in 2025: letting more than 2,000 participants test their fit for AI safety research, supporting over 200 lab fellows, and establishing more impact-oriented research collaborations like the one above.

What are this project's goals and how will you achieve them?

Apart aims to reduce catastrophic risk from AI through impactful technical work that makes AGI policy, governance, and deployment safer and more trustworthy. We do this by empowering new research and new researchers at scale through our Apart Sprint research hackathons and our Apart Lab fellowship.

Why Apart?

Apart complements existing AI safety organizations (FAR AI, MATS, CAIS, etc.) through our:

  • Research hackathons: We have not seen events like our hackathons anywhere else. They are designed for participants to create pilot experiments during the weekend itself, rather than beforehand as with academic workshops. This means we can 1) get fast results on the latest topics, 2) create weekends of high counterfactual value for participants, and 3) invite Apart Lab fellows from the hackathons meritocratically, based on reviews from our hackathon judges. Submissions are open-ended and reviewed by a jury of established researchers in the field.

  • Fellowship structure: Promising hackathon teams are invited to our 4-6 month Apart Lab fellowship, where they work part-time through a four-stage structured process to publish a peer-reviewed paper (or other equally impactful deliverable) as lead authors. We hold a weekly meeting, review and evaluate deliverables for each stage, and connect fellows with advisors, among much else.

  • Agenda: We aim to enable broad exploration of empirical research in impactful domains of technical AI safety research. With our research hackathons and fellowship, we develop a multitude of AI safety pilot experiments. We collaborate with competent partners to provide expert discussion and guidance to specific teams as they develop their work into impactful publications (or other deliverables).

  • Impact per dollar: A research team can publish a paper at just $7,266 in direct costs, including compute, conference participation, and staff support, adjusted for a team's probability of publication success. Additionally, we increase our counterfactual impact per dollar by covering as many teams' conference participation, compute, and stipends as possible through research grants independent of the core funding this grant supports. Our hackathons cost between $2,000 and $12,000 to run but are often subsidized by our collaborators, similarly to the Lab projects.

  • Research volume: We are able to solve specific research problems that require larger volume, such as LLM evaluation tasks, control techniques, or demonstrations. Our programs have led to more than 350 research projects submitted during hackathons and 13 peer-reviewed publications.

  • Target demographic: We target senior professionals who are highly competent in technical domains relevant to AI safety. Our programs are designed to appeal to action-oriented individuals who would like a direct outcome and might not be able to fully relocate or dedicate all their time to career exploration.

  • Global focus: We have hosted hackathons with partners across all inhabited continents and support talented researchers globally. Our approach discovers talent from underserved demographics based on merit in the hackathons rather than CVs. This crucially improves the diversity of viewpoints in AI safety and selects for teams that are able to iterate fast.

How will this funding be used?

We are seeking funding to support our upcoming growth during December 2024 through March 2025. With recent additions of top talent, this funding will help us capitalize on our current momentum and potential.

Our total funding need for this period is $452k, broken down as follows:

  • Staff ($284k – 63% of total):

    • Salaries for core team, advisors, contractors, and staff development.

  • Publication and Events Costs ($46k – 10% of total):

    • Travel for conference presentations (core team and Apart Lab fellows), hackathon development, and team events.

  • Research and Administrative Costs ($83k – 18% of total):

    • Compute resources, office space, fiscal sponsorship fees, equipment, software subscriptions, and other services.

  • Buffer/Contingency ($39k – 9% of total):

    • Allocated for unforeseen program and personnel expenses.

Why Fund Apart Now?

  • We offer exceptionally high return on investment

    • direct costs per Apart Lab fellow average ~$3,303 and can approach zero when fellows secure direct research grants

    • hackathons activate participants for as little as $30 per person (even excluding sponsorships!)

  • We are in a unique position to accelerate global talent and act as a connector in the AI safety ecosystem

    • 20+ global AI safety research hackathons with more than 2,100 participants all over the globe, 350+ project entries and an NPS of 64

    • Impactful collaboration partners such as METR, Anthropic, the Cooperative AI Foundation, Apollo Research, Entrepreneur First, and FAR AI

  • Our approach is validated and ready for scaling

    • 2x growth of Apart Lab batch size in the past 6 months, from 17 fellows and 7 projects per quarter (Q1 2024) to 35 fellows and 11 projects per quarter (Q3 2024)

    • 13 peer-reviewed papers published since 2023 (6 main conference papers, 9 workshop acceptances), including at NeurIPS, ICLR, ACL, and EMNLP.

    • Significant career benefits reported by fellows and hackathon participants, incl. positions at FAR.ai, the Cooperative AI Foundation, Lionheart Ventures, and more.

  • Given current AGI development timelines, there is an urgent need to maintain momentum and expand impactful AI safety research; we believe Apart has now become a safe bet for doing so

Who is on your team and what's your track record on similar projects?

Our team at Apart Research comprises experienced professionals in AI safety, research, and community building:

  • Jason Schreiber (Hoelscher-Obermaier) (Co-Director): PhD in Physics, AI safety expertise from ARC Evals (METR) and PIBBSS, 5 years as AI research engineer. Leads Apart Lab, guiding 70+ fellows across 25 research projects in 2024.

  • Esben Kran (Co-Director): AI safety researcher and entrepreneur. Co-founded Apart Research in 2022. Organized 20+ global hackathons with 1,800+ participants and 340+ project entries.

  • Natalia Pérez-Campanero Antolín (Research Project Manager): PhD from Oxford, project management experience from Royal Society, where she supported 100+ entrepreneurs. Supervises research teams and develops support infrastructure.

  • Archana Vaidheeswaran (Community Program Manager): Leadership experience at Women Who Code and Women in Machine Learning. Has organized events for over 2,000 participants. Designs the community experience, coordinates our global hackathons, and improves participant engagement.

Our team (8 FTE total) also includes:

  • a research communications specialist supporting dissemination of hackathon and lab outcomes

  • a research acceleration engineer helping our community scale up their experiments

  • two research assistants

  • one funding and operations associate

  • an extensive network of senior research advisors and mentors.

Track Record

  1. AI safety research output

  • Fellowship Growth

    • 2x growth in the past 6 months, from 17 fellows and 7 projects (Q1 2024) to 35 fellows and 11 projects (Q3 2024)

    • Testimonials from fellows highlighting our impact:

      "If I land a job in AI Safety, it will be because of Apart Lab's help." — Philip Quirke, Apart Lab fellow, hired as a Project Manager at FAR AI

      “I participated in a few Apart Hackathons that led me to apply to LASR Labs in London. I met a lot of people as part of working on sprints and was introduced to a lot of pertinent topics within AI safety through them. The paper that we produced as part of LASR Labs is published and was accepted at a workshop at NeurIPS. I'm also working on a research project as part of the Apart Fellowship and we're looking to publish soon.” – Nora Petrova, Staff AI Researcher at Prolific and current Apart Lab fellow

      “[Apart hackathons] helped me learn about mech interp for the first time, find a collaborator and write my first research blog post. I now work full time as a mech interp researcher, and Apart hackathons substantially accelerated / partly caused this happening.” – Joseph Miller, hackathon participant and current MATS scholar

  • Global Engagement

    • 20+ global AI safety research hackathons, 2,100+ participants, 350+ project entries.

    • NPS of 64: average score of 8.9 out of 10; participants rate our impact on their decision to pursue AI safety careers at 5.7 out of 10.

    • Our hackathons can be the spark for pursuing AI safety careers:

      “Today, because of Apart Research's initial spark, I'm thriving in a full-time AI safety research role, having left my engineering position behind after graduation. From being a curious engineering student to a published researcher with a dedicated mentor, my career trajectory has been utterly transformed in just a few months. I'm eternally grateful for the comprehensive preparation, collaborative atmosphere, and invaluable connections that Apart Research fostered.”
      Chandler Smith, Hackathon Participant, now research engineer at the Cooperative AI Foundation

  • Collaborations

    • Partners include METR, Anthropic, the Cooperative AI Foundation, Apollo Research, Entrepreneur First, and FAR AI.

    • Sponsors include METR, Apollo Research, Noema, Future of Life Institute, Flashbots, CeSIA, Entrepreneur First and Sage.

    • Collaborators include Beth Barnes, Neel Nanda, Bo Li, Haydn Belfield, Christian Schroeder de Witt, Lewis Hammond, Sam Watts, Marius Hobbhahn, Eric Ho, Emma Bluemke, Ian McKenzie, Alex Pan, and Alice Gatti, among many others.

What are the most likely causes and outcomes if this project fails? (premortem)

  1. Insufficient net-positive impact on AI safety

  • Mitigation:

    • Rigorous project selection, focusing on frontier agendas applicable to AI risk reduction.

    • Collaborate with aligned partners and advisors to improve project impact.

    • Maintain a responsible disclosure policy to avoid info hazards.

  • Measurement: Track adoption of our research by key AI safety institutions and organizations.

  2. Limited career impact for participants

  • Mitigation:

    • Strategically focus on improving the career transition follow-up.

    • Connect hackathon participants and lab fellows with established researchers and orgs through events, advising, co-authoring opportunities, and participation in academic conferences.

    • Focus on strengthening researchers' resumes through peer-reviewed publication and improved dissemination.

  • Measurement: Track post-program placement rates and long-term career trajectories.

  3. Scaling challenges

  • Mitigation:

    • Build on our validated approach to positive impact for participants and technical projects.

    • Prioritize direct impact in new interventions for our fellowship and hackathon programs.

    • Diversify revenue sources to support strategic growth.

  • Measurement: Monitor key performance indicators (e.g., publications per fellow, accepted grants per project, cost per outcome) during scaling.

Given our strong track record, we believe that with careful attention to these potential risks and the implementation of appropriate mitigation strategies, Apart Research has a high likelihood of continued success.

How much money have you raised in the last 12 months, and from where?

In the past 12 months, we've raised over $500,000 from a combination of sources including LTFF, Foresight Institute, and ACX Grants.


ADAM

17 days ago

Could you briefly clarify why AI safety matters so much when we don't yet have the machine that would be the source of all this potential risk? All the best.