
Funding requirements

  • Sign grant agreement
  • Reach min funding
  • Get Manifund approval

aCFAR 2025/6 Fundraiser


Anna Salamon

Proposal · Grant
Closes January 30th, 2026
$0 raised
$500 minimum funding
$125,000 funding goal

Offer to donate

27 days left to contribute

You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.


Project summary

We helped create the rationalist and AI safety communities in our early years (they were already here, but our workshops helped prompt many people to move to the Bay Area, go into AI safety work, etc.). Our workshops also shaped rationalist culture.

We are trying again ~freshly, using what we know now and didn't know then. (But keeping the good parts: CFAR's format + the best 2/3rds of its classes.)

For a detailed and earnest account of what's up now, see:

  • Our fundraiser post
  • Our unsolved problems about our workshop
  • More about our current workshop

(We put a ton of effort into these posts, and I think they really show what's going on inside our work. They're long, but you can skip around. We're also very happy to field questions on Calendly.)

What are this project's goals? How will you achieve them?

We aim to run several pilot workshops that test and develop our new take on things, and to write clear, detailed LessWrong posts explaining what we've found, so that we can get feedback and insights from others that will help us develop further.

We will achieve these by:

  • Having our ~8 curriculum developers (plus maybe a few additions) continue to run weekly test sessions where we try our stuff one-on-one or one-on-two with volunteers (and try to notice what's actually going on in our volunteers that might not fit our theories of how humans work)

  • Having our internal colloquium continue to run every two weeks, where our curriculum developers give talks and discuss their work

  • Running occasional 3-day instructor curriculum development and workshop prep retreats

  • Running pilot workshops (~25 paying participants, ~7 staff, ~5 volunteers, 4.5 days) where we can see what happens when our take on rationality is practiced by a large group all at once. (This is helpful in part because humans are social, and hence inhabit cognitive patterns differently when a whole group is practicing them together.)

How will this funding be used?

This funding will support our organization as we develop our curriculum, run workshops for participants, and maintain our basic organizational infrastructure (admin, our venue, etc.).

Who is on your team? What's your track record on similar projects?

Our executive director is Anna Salamon, who cofounded CFAR back in 2011. Our team also includes Davis Kingsley, Preston Greene, Jack Carroll, Divia Eden, Stephanie Payor, Galen, and John Salvatier. Many of our team members have been developing and teaching rationality material for years, both at CFAR and in other venues (e.g., Preston taught rationality courses to undergrads while a professor at Singapore University).

What are the most likely causes and outcomes if this project fails?

We fail at our workshop to-do items, e.g. because they involve more difficult dynamics than we've realized.

How much money have you raised in the last 12 months, and from where?

We received $150k from SFF in Q2 of 2025 and $166k from a private donor in Q3 of 2025.

Similar projects

  • Sahil: [AI Safety Workshop @ EA Hotel] Autostructures. Scaling meaning without fixed structure (...dynamically generating it instead.) ($8.55K raised)
  • Remmelt Ellen: 11th edition of AI Safety Camp. Cost-efficiently support new careers and new organisations in AI Safety. (Technical AI safety, AI governance; $45.1K raised)
  • Jack Despain Zhou: Center for Educational Progress: Volunteer Incubator. Bringing together the educators and advocates committed to building and scaling academic excellence. ($0 raised)
  • Apart Research: Apart Research: Research and Talent Acceleration. Support the growth of an international AI safety research and talent program. (Science & technology, Technical AI safety, AI governance, EA community, Global catastrophic risks; $0 raised)
  • Apart Research: Keep Apart Research Going: Global AI Safety Research & Talent Pipeline. Funding ends June 2025: urgent support for a proven AI safety pipeline converting technical talent from 26+ countries into published contributors. (Technical AI safety, AI governance, EA community; $131K raised)
  • Jared Mantell: Augmentation Lab 2025: Prototyping Human-Aligned Futures. A 10-week Harvard/MIT residency exploring human augmentation via 'Rhizome Futurism' to build an interconnected, beneficial future. (Science & technology; $0 raised)
  • Jonas Vollmer: AI forecasting and policy research by the AI 2027 team. AI Futures Project. (AI governance, Forecasting; $44K raised)
  • Center for AI Policy: Support CAIP's Advocacy Work in 2025. Advocating for U.S. federal AI safety legislation to reduce catastrophic AI risk. (AI governance, Global catastrophic risks; $0 raised)