Active Grant
$17,732 raised
$248,000 funding goal

Project summary

Apart is an AI safety research organization that incubates talented researchers and facilitates innovative research in AI safety for hundreds of people around the globe through short research sprints and our Apart Lab research fellowship.

  • Research Sprints (Alignment Jams): The Apart sprints offer weekend-long opportunities for talented individuals worldwide to engage in AI safety research and create original work.

  • Apart Lab Fellowship: The Apart Lab is a 3-6 month long online fellowship guiding promising teams of aspiring researchers to produce significant academic contributions showcased at world-leading AI safety conferences. Apart provides research mentorship and project management to participating teams.

In 2023 alone, Apart has hosted 17 research sprints with more than 1,000 participants and over 170 resulting projects. 10 of our 24 Apart Lab fellows have already completed their research, resulting in three publications accepted at top-tier academic ML workshops and conferences (NeurIPS, ACL, ICLR) and five papers currently under review (1, 2, 3, two with restricted access). Two of our fellows are on track to secure research positions with experienced AI safety researchers at Oxford University.

Apart Research Sprints are

  • Career-changing: Sprints focus on increasing participants' ability and confidence to contribute to AI safety research, with a self-reported update towards working on AI safety of 9.15 percentage points (see the testimonials for participants' own reasons).

  • Research incubators: Sprints in 2022 and 2023 resulted in more than 200 research projects. See, for example, our sprint on new failure cases in multi-agent AI systems, resulting in 21 submitted projects, 2 workshop papers at NeurIPS (1, 2), and 2 contributions to a major upcoming review on multi-agent security by CAIF.

  • Global: We offer our sprints both fully remotely and at more than 50 in-person locations across the globe – including top universities like Stanford, UC Berkeley, Cambridge, and Oxford, as well as comparatively underserved locations (India, Vietnam, Brazil, Kenya) – to invite a global and diverse research community to engage with AI safety.

  • Community-driven: With our decentralized approach, we make it easy for local organizers to run complex events (see testimonials) and bring together teams of researchers to work on the most important questions in AI safety (more than 200 research teams have participated to date).

Apart Lab Fellowships are

  • Incubators for AI safety researchers: Apart Lab fellows take charge of their own research projects and become first authors on their papers while being supported with research guidance and project management by experienced mentors (see testimonials from fellows).

  • Providers of counterfactual career benefit: We help fellows develop their research skills, connect with senior AI safety researchers (see co-authors), develop their research portfolio (in 2023, 3 conference papers + 5 papers under review), and secure impactful research roles (in 2023, two Apart Lab fellows are on track to secure research positions with experienced AI safety researchers at Oxford University).

  • Output-focused: Apart Lab fellows conceive impactful AI safety research, and Apart Lab helps them make it real! We provide the structure of a lab environment, including internal peer review, deadlines for accountability, as well as a default for sharing information (subject to our info-hazard policy) with the research community (see all research outputs) — while prioritizing projects with impact.

  • Remote-first: Apart Lab fellowships are open to aspiring researchers globally without requiring relocation to London or Berkeley.

At Apart, our work is

  • Impact-focused: Apart is dedicated to reducing existential risks from AI via AI safety technical research and governance, with a focus on empirical projects. Apart directly confronts AI safety challenges by facilitating research in mechanistic interpretability, safety evaluations, and conceptual alignment (read more) and by training the next generation of AI safety researchers.

  • Filling important gaps: We aim to scale up AI safety mentorship and make it accessible to everyone across the globe. We maintain a positive, solution-focused, open-minded culture to invite a diverse community to contribute to AI safety research.

  • Embedded in the wider AI safety ecosystem: We work with a wide array of research organizations such as Apollo Research, the Cooperative AI Foundation, the Turing Institute, and Oxford University and have collaborated with researchers at DeepMind and OpenAI. We also actively engage with the effective altruism community, e.g. by speaking at EAGx conferences.

What are this project's goals and how will you achieve them?

Apart aims to reduce existential risk from AI by facilitating new AI safety research projects and incubating the next generation of global AI safety researchers. We achieve this by running global research sprints and an online research fellowship, which are neglected and impactful ways of incubating AI safety research projects and researchers from the global talent pool. This funding will allow us to improve the quality and scale of our incubation efforts over the next 6 months.

Expand the reach of our research sprints

  • In the first half of 2024, we aim to double the number of participants in our AI safety research sprints by engaging 1,000 new sprint participants, achieving this milestone in half the time it took to reach our first 1,000 while maintaining our priority on participant quality.

  • Building on the success of our previous research sprints — with our Interpretability, Evaluations, Governance and Benchmarks events drawing from 60 to over 150 participants each — we will use the grant to improve their reach and quality by bringing in more partners (like CAIF and Apollo Research), scheduling our sprints months in advance, advertising them to more target groups, partnering with new local sites for the events, improving event infrastructure, and improving event mentorship and talks.

  • Given the current abundance of work to be done in AI safety, we want to actively explore new agendas in technical AI safety and AI Governance, where possible in cooperation with other organizations (as we have done before with Entrepreneur First and the Cooperative AI Foundation).

Bring more talented researchers into AI safety

  • Having established the Apart Lab, we will increase capacity for the quantity and quality of mentorship to support fellows in becoming active contributors to impactful AI safety projects. Our focus extends to research agendas in technical governance and evaluations, in collaboration with partners specializing in applied alignment and AI security. Our goal is to produce outputs that are directly beneficial for governance and alignment, in addition to our academic publications.

  • We have clear indications – a good publication track record and early success in placing Apart Lab graduates in impactful roles – that the Apart Lab fellowship can accelerate aspiring researchers' impact in AI safety research and help them pick up the skills they need to succeed. We will invite 30 fellows during the Spring 2024 cohorts and significantly improve the quality of mentorship by adding capacity and improving processes.

  • Our goal is to help aspiring AI safety researchers to get to a position where they can meaningfully contribute. We help transition skilled Apart Lab fellows into full-time roles when possible. This grant will also allow us to support the fellows' continued journey.

How will this funding be used?

We seek funding to sustain and grow Apart as a whole. This includes the costs to run our research sprints and research fellowships, support for Apart Lab research fellows, as well as costs for compute, contractors, conference attendance, and payroll.

To sustain these current efforts for the coming 6 months, we calculate the following funding needs:

  • Salary costs: $105k

  • Operational & Admin costs, incl. software, outreach, fiscal sponsorship, workspace and other miscellaneous costs: $33k

  • Research costs, compute, APIs, conference travel: $26k

To grow Apart further over the coming 6 months, we calculate the following funding needs:

a) Expand the Apart core team (additional 2 FTE for six months; onboarding in progress): $44k

  • 1 FTE Ops and fundraising support: $22k

  • 1 FTE Research assistant for Apart Lab: $22k

b) Offer stipends to Apart Lab fellows based on pre-determined milestones during the fellowship to allow our global community to dedicate more time to solving important issues within AI safety: $30k for 30 fellows during the coming six months at $1k / fellow.

c) Attract and remunerate external senior mentors to provide support for our Apart Lab researchers, further improving the quality of our academic output: $10k for 200 mentor hours at ~$50/hour

These items total $248k: $164k to sustain current operations and $84k to grow. Depending on the amount of funding received, we will prioritize maintaining our operations first and then invest in growing the organization, in accordance with the priorities outlined above.

Who is on your team and what's your track record on similar projects?

Apart Research is led by Esben Kran (Director) and Jason Hoelscher-Obermaier (Co-director). Our team members have previously worked for ARC Evals (now METR), Aarhus University, the University of Vienna, and 3 AI startups. Find an overview of the Apart leadership on our website.

Our success with both our Research Sprints and the Apart Lab fellowship is best showcased by the following milestones and achievements:

Research Sprint Achievements:

  • Since November 2022, Apart has hosted 19 research sprints with 1,248 participants and 209 submitted projects, with research from the sprints accepted at top-tier conferences and workshops

  • We have co-hosted and collaborated with many global research organizations, including Apollo, CAIF, and Entrepreneur First. Additionally, our Research Sprints hosted speakers from DeepMind, OpenAI, Cambridge and NYU, among others

  • We provided real value through our collaboration with and contracting for CAIF on a Multi-Agent Risks research report, which built on our research sprint with over 150 signups and 21 submissions. Their Research Director Lewis Hammond gave the following feedback:

    We (the Cooperative AI Foundation) partnered with Apart Research to run a hackathon on multi-agent safety, to feed into an important report. We needed to work to tight deadlines but overall the process of organising the event was smooth, and there were many more participants than I was expecting. Of those I spoke to afterwards, everyone remarked on how much they enjoyed taking part.

  • Our Research Sprints have a meaningful impact on participants with an average net promoter score of 8.6/10 (indicating the likelihood of recommending the event to others) and 80% of participants reporting a 2x-10x or greater value in terms of counterfactual time spent. Below are some testimonials from participants following our Interpretability Research Sprint:

    A great experience! A fun and welcoming event with some really useful resources for initiating interpretability research. And a lot of interesting projects to explore at the end!

    Alex Foote, MSc, Data Scientist - this project was incubated by Apart and eventually turned into a workshop publication at ICLR 2023, with Alex as the lead author

    The Interpretability Hackathon exceeded my expectations, it was incredibly well organized with an intelligently curated list of very helpful resources. I had a lot of fun participating and genuinely feel I was able to learn significantly more than I would have, had I spent my time elsewhere. I highly recommend these events to anyone who is interested in this sort of work!

    Chris Mathwin, MATS scholar

Apart Lab Achievements:

  • In 2023, Apart Lab fellows published 6 papers in total, among them 3 at ACL and the workshop tracks of NeurIPS and ICLR. Our publications have tackled model evaluations (e.g. the robustness of model editing techniques) and mechanistic interpretability (e.g. interpreting intrinsic reward models learned by RLHF), as well as providing accessible and scalable tools for probing model internals (e.g. neuron2graph and DeepDecipher). With more funding, we would invest more time into mentoring and establishing external collaborations to further improve the impact potential of our research outputs.

  • Two Apart Lab graduates are on track to secure research positions with experienced AI safety researchers at Oxford University. This demonstrates that Apart Lab can identify and mentor top talent. In the coming six months, we want to significantly increase the number of placements by improving our processes for follow-up support to our graduates and investing more time in follow-up mentoring.

  • Apart Lab also provided significant mentoring value to our fellows as evidenced in the following testimonials:

    As an undergraduate with little experience in academia, Apart was very helpful in guiding me through the process of improving my work through the publication process. They helped me to bridge the gap between a rough hackathon submission and a well-refined conference paper. I’d recommend the Apart Lab Fellowship for anyone looking to break into the research community and work on the pressing problems in AI Safety today.

    Like lots (most?) people at Apart Lab, I'm trying to transition from one career to another in AI Safety. There is no well-trodden "best" path so an organisation like Apart Lab which is willing to help individuals for free is a god send. My initial entry was the Hackathon - which was a tough weekend, as I came in with little AI knowledge, but I made some (little) progress, and received positive feedback from the moderators. [...] If (when!) I land a job in AI Safety it will be because of Apart Lab's help. It's a great organisation.

We aim to build on and improve upon our existing track record to incubate more high-impact AI safety researchers around the globe.

What are the most likely causes and outcomes if this project fails? (premortem)

We list the most likely possible causes of failures specific to our projects, together with the likely default outcomes and the mitigation strategies we will implement.

  • Info-hazards

    • Outcome: Publishing work that disproportionately helps dangerous capabilities compared to safety

    • Mitigation strategies: Define and refine our publication policy to have internal and external review for info hazards

  • Lack of research taste for Apart

    • Outcome: The events and projects from Apart focus on less impactful or even harmful research topics

    • Mitigation strategies: Obtain external feedback on Apart’s research portfolio; continue to collaborate with external advisors and researchers with complementary backgrounds; and continue exposing our research to scientific peer review

  • Lack of research taste for participants and fellows

    • Outcome: Aspiring researchers spend too much time on fruitless research

    • Mitigation strategies: Our mentors and collaborators continually discuss and evaluate research projects on informative review criteria for research taste and impact; mentor meetings focus on empowering and guiding the fellows' own research

  • Lab graduates do not realize their potential after the fellowship

    • Outcome: Lab fellows either get a non-AI-safety industry job or do not realize their potential within AI safety

    • Mitigation strategies: Besides the output-focused fellowship structure which helps fellows build legible AI safety credentials, we also stay in contact with and follow up with fellows to provide feedback and opportunities

  • Unintentional support of capability-focused organizations

    • Outcome: Our research or partnerships indirectly assist capability-oriented organizations

    • Mitigation strategies: We discuss our partnerships with external advisors and consider the implications of our projects on capabilities advancement (in addition to following our project publication policy)

  • De-motivating AI safety talent

    • Outcome: Talented and initially motivated individuals end up not contributing to AI safety research

    • Mitigation strategies: Gather anonymous feedback on the frequency and quality of mentorship, focusing on mentor availability and skill relevance; monitor and adjust project deadlines to ensure they do not jeopardize mental health and work-life balance; communicate clearly about the academic review process to put potential rejections into context; and ensure that the impact of research outputs is communicated.

  • Poor selection of Apart Lab fellows

    • Outcome: We select fellows who do not have the right skills to contribute to AI safety research or cannot do so for logistical reasons

    • Mitigation strategies: Improve reach and targeting of sprints; improve our evaluation processes for Apart Lab candidates, focusing on the research and collaboration skills demonstrated during the sprint and during an initial trial phase.

We believe that, given its past track record, Apart has a high chance of continued success if we devote sufficient attention to these potential risks and implement appropriate mitigation strategies.

What other funding are you or your project getting?

Apart receives several project-specific sponsorships or contractor assignments ranging from $1,000 to $20,000 that are outside the scope of this grant. Previous sponsors include Apollo Research, Entrepreneur First, and CAIF. To support Apart’s continued development in impactful research, we are seeking funding from currently open AI safety foundations, including CLR and EA Funds. We anticipate a high counterfactual value of the funding during the next 6 months.


Apart Research

4 months ago

Progress update

Apart Research Progress Update - September 2024

Key Achievements

  1. Expanded Participation: Our Sprints have gained significant traction in 2024, with over 1,000 signups and approximately 100 team submissions at our events.

  2. Growth in Lab Fellows and Projects: The Apart Lab has seen rapid expansion throughout the year. We've progressed from 17 fellows working on 7 projects in Q1 to our current cohort of 37 fellows collaborating on 12 projects in Q3, bringing the total to over 75 fellows (incl. Q2). We believe that this steady growth showcases the increasing demand for our program and our capacity to support more ambitious research endeavors.

  3. Team Expansion:

    • Welcomed two experienced core team members:

      • Natalia Pérez-Campanero Antolín (former program manager for the Royal Society) - Leading research project management at Apart Lab

      • Archana Vaidheeswaran (former board member of Women Who Code) - Managing community and events for hackathons

    • Expanded our in-house mentorship capacity to support the growing number of fellows

  4. Research Impact: Several research papers from our fellows are currently under review at AI and machine learning venues. A selection of these papers:

    • Classifies whether a model is cheating on evaluations, i.e. if the model has been trained on a benchmark

    • Identifies deceptive patterns in fine-tuned LLM conversation, inspired by dark patterns in UI design, finding that Meta's models are manipulative while Anthropic's are not

    • Evaluates cyber offense capabilities from LLM performance on realistic challenges conforming to industry-standard cyber offense risk classifications, showing that models can autonomously perform complex cyber offense operations

    • Examines how to mitigate infectious jailbreaks in realistic multi-agent systems

Ongoing Initiatives

  1. Increasing participation of high-potential talent in hackathons, with a focus on underrepresented communities including senior tech professionals, cybersecurity experts, and individuals with hard science backgrounds

  2. Enhancing the impact of lab research projects by supporting teams in pursuing ambitious and potentially groundbreaking research ideas

  3. Developing an AI safety incubator to facilitate the transition of high-impact research ideas into real-world solutions

  4. Actively recruiting a research engineer to further support our vision of producing impactful research projects

  5. Onboarded a research communications specialist who can help us make all of the progress above more visible to Apart's followers

If you are interested in supporting us on our journey, feel free to use this page, or contact finn@apartresearch.com.

donated $22

Manuel Allgaier

7 months ago

Seems valuable and cost-effective, based on meeting the co-founder and the info I found online (I haven't participated nor met other participants). I like the lean, agile structure and the focus on facilitating research that directly contributes as well as helps (young) researchers boost their careers.

donated $22

Manuel Allgaier

7 months ago

(Note that I donated only $22 (€20) because I committed to donating that amount as part of a transaction (I bought an event ticket from someone for ticket price + 20€ donation) and I wanted to try out Manifund. I usually donate at the end of the year, I might donate more then.)


Apart Research

11 months ago

Progress update

Project update

Since the submission on December 18, 2023, we have...

  • Reshaped our leadership with a renewed focus on strategy and organizational development, with Fazl Barez changing position from Co-director to Research Advisor and Jason Hoelscher-Obermaier stepping into the Co-director position.

  • Onboarded 16 new fellows working on 7 different research projects. This new cohort comes from 9 different countries (Germany, the UK, Denmark, Sweden, the US, Spain, France, India, and Vietnam) and spans a broad spectrum of experience levels:

    • 2 PhDs (Deep Learning, Astrophysics)

    • 2 PhD students (ML, Statistics)

    • 5 with Master's degrees (Computer Science, Computational Modeling, Complex Systems, Social Science, and Logic, Philosophy and History of Science)

    • 2 Master's students (AI & Data Science, Human-Centered AI)

    • 2 Bachelor's graduates (Mechatronics, Epidemiology)

    • 3 Bachelor's students (Computer Science, Data Science, Mathematics)

  • Had more research coming out of Apart Lab

  • Collaborated on the Sleeper Agents paper published by Anthropic


Nuño Sempere

12 months ago

This looks shiny to me. I am considering funding it for a small amount.

Pros:

  • Successes and accomplishments seem valuable

  • Proxies look good

  • Writeup seems thoughtful

Cons:

  • I don't understand why other people haven't funded this yet

  • Maybe this application is exaggerating stuff?

  • Maybe the organization adds another step in the chain to impact, and it would be more efficient to fund individual people instead?

  • Maybe the biggest one: how do I know the success is counterfactual? Say that someone participated in a hackathon/fellowship/etc, and then later got a research position in some Oxford lab. How do I know that the person wouldn't have gotten something similarly impressive in the absence of your organization?


Nuño Sempere

12 months ago

I guess that another way of expressing the above might be that this seems potentially good, but given the large amount of funding it is asking for, it feels like someone should evaluate this in-depth, rather than casually?


Austin Chen

12 months ago

@NunoSempere I also think that Apart is interesting; at the very least, I think they have an amount of "gets things done" and marketing power that otherwise can be missing from the EA ecosystem. And they have a really pretty website!

I am similarly confused why they haven't received funding from the usual suspects (OpenPhil, LTFF). On one hand, this makes me concerned about adverse selection; on the other, "grants that OP/LTFF wouldn't make but are Actually Good" would be an area of interest for Manifund. I would be in favor of someone evaluating this in-depth; if you plan on doing this yourself, I'd offer to contribute $2k to your regrantor budget (or other charitable project of your choice eg ALERT) for a ~10h writeup.

See also two previous Manifund projects from Apart's leadership:


Apart Research

12 months ago

Thank you for the review and the positive words. We'll address your points below. Please let us know if anything remains unclear. We are more than happy to provide more details.

I don't understand why other people haven't funded this yet

Apart has, for much of 2023, been a very small team (2 FTE) fully focused on experimenting with and improving the research sprints and lab fellowship. Until now, we simply did not have the capacity to scale up the program and the corresponding fundraising. With an additional leadership team member and two half-time assistants on board (4 FTE), we finally have the capacity to tackle scaling and fundraising seriously.

Maybe this application is exaggerating stuff?

We have been quite careful not to overstate our case in this proposal and, needless to say, all numbers presented are correct. Let us know if you need any more details or evidence.

Maybe the organization adds another step in the chain to impact, and it would be more efficient to fund individual people instead?

We think that a peer-reviewed publication (particularly a first-author publication) is close to a necessity for most research careers and that aspiring AI safety researchers are much less likely to achieve this on the same swift time scale without the right structure and guidance.

Another added benefit of funding us versus funding individual people directly is that, by directly interacting with a very large number of individuals through our research sprints, we can identify and carefully select talented people who would not be obvious targets for other funders, and thereby uncover talent that would otherwise be overlooked.

Maybe the biggest one: how do I know the success is counterfactual? Say that someone participated in a hackathon/fellowship/etc, and then later got a research position in some Oxford lab. How do I know that the person wouldn't have gotten something similarly impressive in the absence of your organization?

As mentioned above, we believe that (a) having a paper (especially as first author) accepted or submitted for peer-reviewed AI safety publication will often be critical to our target demographic's career trajectory and (b) the people we support are less likely to convert a hackathon project into a paper worthy of submission or acceptance at a top venue without our support. Evidence for (a) comes from our fellows in interview processes, who report that most of their interview time has been spent discussing their Apart Lab research project (with one interviewer remarking that going through the program is "a strong part of [the] resume"). Evidence for (b) is harder to come by, but our fellows' self-reports suggest that they believe they could not have achieved a similar output without the help of Apart Lab.


Apart Research

12 months ago

Thank you for the kind words and the generous evaluation request, @Austin!


Neel Nanda

12 months ago

"resulting in three publications accepted at top-tier academic ML venues (NeurIPS, ACL, ICLR),"

To add context in case people get misled by this line, the NeurIPS and ICLR papers (N2G here) were workshop papers, as far as I can tell, not main conference papers. For people not in ML, a conference like NeurIPS or ICLR has both conference papers (one of the highest status ways to publish in ML) and workshop papers (lower prestige and less selective, I'd roughly say a workshop paper is 1/3-1/2 as impressive as a conference paper).

To me, the prior is that most hackathon projects are a total flop and don't go anywhere, so helping someone convert it to a workshop paper is still impressive! (But main conference would have been very impressive). And the ACL paper was a main conference paper, which is impressive!


Apart Research

12 months ago

@NeelNanda That is correct, thank you for the clarification, Neel! The text has been updated and we're also happy to say that there are now 7 papers in review, so we're crossing fingers for the research teams :D