Apart Research is a non-profit research and community-building lab that aims to reduce AI risk through technical research and talent acceleration. We engage thousands of people around the globe in AI safety research through our Apart Sprints research hackathons and our Apart Lab research fellowship.
Apart Hackathons: 3-day research hackathons on impactful AI safety topics with senior mentors, speakers, and judges, designed to help participants test their talent fit, pilot original research, and connect with like-minded collaborators.
Apart Lab: A program to guide aspiring researchers toward peer-reviewed AI safety research, providing a network of talented researchers, resources and training, compute, research mentorship, and project management. Our fellows have published their research in major venues, such as NeurIPS, ACL, and ICLR.
For example: our Code Red Hackathon, co-hosted with METR in March 2024, helped 168 participants engage directly with a novel research agenda, accelerated METR's work (with 230 new evaluation ideas, 108 specifications, and 28 implementations), and led to ten new fellows in Apart Lab. All ten contributed challenge implementations for METR's task standard during the hackathon and, six months later, are finishing five original research projects in Apart Lab: three still in progress and two already accepted at the NeurIPS workshops System 2 Reasoning At Scale ("CryptoFormalEval") and EvalEval 2024 ("Rethinking CyberSecEval"). One team member describes the impact as follows: "Apart's massively accelerated my progress, offering me the support to carry out meaningful research". Of the ten, six are now transitioning from careers in tech and cybersecurity to AI safety; four are still studying.
Your funding will support the growth of this approach in 2025: letting more than 2,000 participants test their fit for AI safety research, supporting over 200 lab fellows, and establishing more impact-oriented research collaborations such as the one above.
Apart aims to reduce catastrophic risk from AI through impactful technical work that makes AGI policy, governance, and deployment safer and more trustworthy. We do this by empowering new research and new researchers at scale through our Apart Sprint research hackathons and our Apart Lab fellowship. Specifically, our goals are to:
Discover AI safety talent and accelerate career transitions: How can technical professionals worldwide contribute, test their fit, and transition their careers into AI safety?
Connect talent with collaborators and mentors in engaging monthly research hackathons where participants can test their fit, network with others, and contribute directly to research
Provide a community and structured platform for research with peer review, advising, and a documented process towards impactful publications
Provide remote-first opportunities compatible with full-time employment for senior engineers and researchers to transition into AI safety
Push the frontier of impactful AI safety research: How can we increase the volume of empirical progress, support the existing ecosystem, and make AI safe and beneficial?
Research hackathons can yield surprisingly high-quality projects (such as "EscalAItion", "Sandbag Detection through Model Degradation", "Detecting Implicit Gaming", "Neuron Behavior via Dataset Example Pruning and Local Search", and "Internal Conflict in GPT-3")
Hackathons rapidly engage hundreds of potential researchers on high-volume research questions posed by impactful AI safety organizations (such as CAIF, METR, Apollo Research, and others)
Apart Lab fellows can lead original projects that influence research fields like evaluations, interpretability, and alignment.
Ensure worldwide competence on AI risk through a global network of AI safety researchers: How can local groups become established, develop local talent, and become national leaders in research?
Provide the opportunity for local organizers to either co-host a local version of our global hackathons or make their local events global with our help
Connect researchers and groups with the international research community through online events, research collaborations, and resources
Apart complements existing AI safety organizations (FAR AI, MATS, CAIS, etc.) through our:
Research hackathons: We have not seen events like our hackathons anywhere else. They are designed for participants to create pilot experiments during the weekend itself, rather than beforehand as in academic workshops. This means we can 1) get fast results on the latest topics, 2) create weekends of high counterfactual value for participants, and 3) meritocratically invite Apart Lab fellows from the hackathons, based on the reviews from our hackathon judges. Submissions are open-ended and reviewed by a jury of established researchers in the field.
Fellowship structure: Promising hackathon teams are invited to our 4-6 month Apart Lab fellowship, where they work part-time through a four-stage structured process to publish a peer-reviewed paper (or another equally impactful deliverable) as lead authors. Among much else, we hold weekly meetings, review and evaluate the deliverables for each stage, and connect fellows with advisors.
Agenda: We aim to enable broad exploration of empirical research in impactful domains of technical AI safety. Through our research hackathons and fellowship, we develop a multitude of AI safety pilot experiments, and we collaborate with competent partners to provide expert discussion and guidance to specific teams as they develop their work into impactful publications (or other deliverables).
Impact per dollar: A research team can publish a paper for just $7,266 in direct costs, including compute, conference participation, and staff support, adjusted for a team's probability of publication success (see the illustrative calculation after this list). We further increase our counterfactual impact per dollar by covering as many teams' conference participation, compute, and stipends as possible through research grants independent of the core funding requested here. Our hackathons cost between $2,000 and $12,000 to run and, like the Lab projects, are often subsidized by our collaborators.
Research volume: We can address research problems that require a large volume of contributions, such as LLM evaluation tasks, control techniques, or demonstrations. Our programs have led to more than 350 research projects submitted during hackathons and 13 peer-reviewed publications.
Target demographic: We focus on senior professionals who are highly competent in technical domains relevant to AI safety. Our programs are designed to appeal to action-oriented individuals who want a direct outcome and may not be able to relocate or dedicate all their time to career exploration.
Global focus: We have hosted hackathons with partners on every inhabited continent and support talented researchers globally. Our approach discovers talent from underserved demographics based on merit demonstrated in the hackathons rather than on CVs. This crucially improves the diversity of viewpoints in AI safety and selects for teams that can iterate fast.
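As an illustration of how the cost-per-paper figure above can be read, the adjustment for publication success divides a team's direct costs by its probability of publishing. The inputs below are hypothetical, chosen only to match the quoted figure; the actual per-team cost and success rate are internal numbers not stated here:

$$
\text{expected direct cost per published paper} = \frac{\text{direct cost per team}}{\Pr(\text{publication})}, \qquad \text{e.g. } \frac{\$5{,}450}{0.75} \approx \$7{,}266
$$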
We are seeking funding to support our upcoming growth during December 2024 through March 2025. With recent additions of top talent, this funding will help us capitalize on our current momentum and potential.
Our total funding need for this period is $452k, broken down as follows:
Staff ($284k – 63% of total):
Salaries for core team, advisors, contractors, and staff development.
Publication and Events Costs ($46k – 10% of total):
Travel for conference presentations (core team and Apart Lab fellows), hackathon development, and team events.
Research and Administrative Costs ($83k – 18% of total):
Compute resources, office space, fiscal sponsorship fees, equipment, software subscriptions, and other services.
Buffer/Contingency ($39k – 9% of total):
Allocated for unforeseen program and personnel expenses.
Why Fund Apart Now?
We offer an exceptionally high return on investment
direct costs per Apart Lab fellow average ~$3,303 and can approach zero when we help fellows pursue direct research grants
hackathons activate participants for as little as $30 per person (even excluding sponsorships!)
We are in a unique position to accelerate global talent and act as a connector in the AI safety ecosystem
20+ AI safety research hackathons with more than 2,100 participants worldwide, 350+ project entries, and an NPS of 64
Impactful collaboration partners such as METR, Anthropic, the Cooperative AI Foundation, Apollo Research, Entrepreneur First, and FAR AI
Our approach is validated and ready for scaling
2x growth of Apart Lab batch size in the past 6 months, from 17 fellows and 7 projects per quarter (Q1 2024) to 35 fellows and 11 projects per quarter (Q3 2024)
13 peer-reviewed papers published since 2023 (6 main conference papers, 9 workshop acceptances), including at NeurIPS, ICLR, ACL, and EMNLP.
Significant career benefits reported by fellows and hackathon participants, including positions at FAR.ai, the Cooperative AI Foundation, Lionheart Ventures, and more.
Given current AGI development timelines, there is an urgent need to maintain momentum and expand impactful AI safety research; we believe Apart has become a safe bet in this space.
Our team at Apart Research comprises experienced professionals in AI safety, research, and community building:
Jason Schreiber (Hoelscher-Obermaier) (Co-Director): PhD in Physics; AI safety experience at ARC Evals (now METR) and PIBBSS; 5 years as an AI research engineer. Leads Apart Lab, guiding 70+ fellows across 25 research projects in 2024.
Esben Kran (Co-Director): AI safety researcher and entrepreneur. Co-founded Apart Research in 2022. Organized 20+ global hackathons with 1,800+ participants and 340+ project entries.
Natalia Pérez-Campanero Antolín (Research Project Manager): PhD from Oxford; project management experience from the Royal Society, where she supported 100+ entrepreneurs. Supervises research teams and develops support infrastructure.
Archana Vaidheeswaran (Community Program Manager): Leadership experience at Women Who Code and Women in Machine Learning. Has organized events for over 2,000 participants. Designs the community experience, coordinates our global hackathons, and improves participant engagement.
Our team (8 FTE total) also includes
a research communications specialist supporting dissemination of hackathon and lab outcomes
a research acceleration engineer helping our community scale up their experiments
two research assistants
one funding and operations associate
as well as an extensive network of senior research advisors and mentors.
AI Safety Research Output
13 peer-reviewed papers published since 2023 (6 main conference papers, 9 workshop acceptances), including at NeurIPS, ICLR, ACL, and EMNLP. Our work has been cited by OpenAI's Superalignment team, and Apart team members contributed to Anthropic's "Sleeper Agents" paper.
Notable Apart Lab publications include:
"Interpreting Learned Feedback Patterns in Large Language Models" (NeurIPS 2024)
"Interpreting Context Look-ups in Transformers: Investigating Attention-MLP Interactions" and "Towards Interpretable Sequence Continuation: Analyzing Shared Circuits in Large Language Models" (both EMNLP 2024)
"Large Language Models Relearn Removed Concepts" (ACL 2024; Best Paper Award at TAIS 2024)
"Understanding Addition in Transformers" (ICLR 2024)
"Detecting Edit Failures In Large Language Models" (ACL 2023)
Fellowship Growth
2x growth in the past 6 months, from 17 fellows and 7 projects (Q1 2024) to 35 fellows and 11 projects (Q3 2024)
Testimonials from fellows highlighting our impact:
"If I land a job in AI Safety, it will be because of Apart Lab's help." — Philip Quirke, Apart Lab fellow, hired as a Project Manager at FAR AI
“I participated in a few Apart Hackathons that led me to apply to LASR Labs in London. I met a lot of people as part of working on sprints and was introduced to a lot of pertinent topics within AI safety through them. The paper that we produced as part of LASR Labs is published and was accepted at a workshop at NeurIPS. I'm also working on a research project as part of the Apart Fellowship and we're looking to publish soon.” – Nora Petrova, Staff AI Researcher at Prolific and current Apart Lab fellow
“[Apart hackathons] helped me learn about mech interp for the first time, find a collaborator and write my first research blog post. I now work full time as a mech interp researcher, and Apart hackathons substantially accelerated / partly caused this happening.” – Joseph Miller, hackathon participant and current MATS scholar
Global Engagement
20+ global AI safety research hackathons, 2,100+ participants, 350+ project entries.
NPS of 64 (average score of 8.9 out of 10); participants rate the impact on their decision to pursue an AI safety career at 5.7 out of 10.
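For context, this assumes the standard Net Promoter Score methodology on a 0-10 "how likely are you to recommend" scale, where promoters answer 9-10 and detractors 0-6 (an assumption about the survey instrument; the 70/6 split below is purely illustrative):

$$
\mathrm{NPS} = \%\,\text{promoters} - \%\,\text{detractors}, \qquad \text{e.g. } 70\% - 6\% = 64
$$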
Our hackathons can be the spark for pursuing AI safety careers:
“Today, because of Apart Research's initial spark, I'm thriving in a full-time AI safety research role, having left my engineering position behind after graduation. From being a curious engineering student to a published researcher with a dedicated mentor, my career trajectory has been utterly transformed in just a few months. I'm eternally grateful for the comprehensive preparation, collaborative atmosphere, and invaluable connections that Apart Research fostered.”
— Chandler Smith, Hackathon Participant, now research engineer at the Cooperative AI Foundation
Collaborations
Partners include METR, Anthropic, the Cooperative AI Foundation, Apollo Research, Entrepreneur First, and FAR AI.
Sponsors include METR, Apollo Research, Noema, Future of Life Institute, Flashbots, CeSIA, Entrepreneur First, and Sage.
Collaborators include Beth Barnes, Neel Nanda, Bo Li, Haydn Belfield, Christian Schroeder de Witt, Lewis Hammond, Sam Watts, Marius Hobbhahn, Eric Ho, Emma Bluemke, Ian McKenzie, Alex Pan, and Alice Gatti, among many others.
Insufficient net-positive impact on AI safety
Mitigation:
Rigorous project selection, focusing on frontier agendas applicable to AI risk reduction.
Collaborate with aligned partners and advisors to improve project impact.
Maintain a responsible disclosure policy to avoid info hazards.
Measurement: Track adoption of our research by key AI safety institutions and organizations.
Limited career impact for participants
Mitigation:
Strategically focus on improving the career transition follow-up.
Connect hackathon participants and lab fellows with established researchers and orgs through events, advising, co-authoring opportunities, and participation in academic conferences.
Focus on strengthening researchers' resumes through peer-reviewed publication and improved dissemination.
Measurement: Track post-program placement rates and long-term career trajectories.
Scaling challenges
Mitigation:
Build on our validated approach to positive impact for participants and technical projects.
Prioritize direct impact in new interventions for our fellowship and hackathon programs.
Diversify revenue sources to support strategic growth.
Measurement: Monitor key performance indicators (e.g., publications per fellow, accepted grants per project, cost per outcome) during scaling.
Given our strong track record, we believe that with careful attention to these potential risks and the implementation of appropriate mitigation strategies, Apart Research has a high likelihood of continued success.
In the past 12 months, we've raised over $500,000 from a combination of sources including LTFF, Foresight Institute, and ACX Grants.