The market for grants
Manifund helps great charities get the funding they need. Discover amazing projects, buy impact certs, and weigh in on what gets funded.

Austin Chen
about 20 hours ago
Apologies for the delay, approved now as part of our portfolio to improve animal welfare!
Alexandria Beck
2 days ago
@Austin Is it possible to share how long the admin approval phase might take? ESR is eager to get started on this project. Thanks!
Kristina Vaia
3 days ago
Yup. There was an AI safety event in Marina del Rey last month, hosted by BlueDot Impact, the AI Safety Awareness Project, and AE Studio; technologists, researchers, and students interested in AI safety participated. AE Studio, in Venice, CA, is an AI product development company with an active alignment team, and its CEO, Judd Rosenblatt, is a well-known figure in the LA tech community and would be a valuable contact. There is also significant crossover between the Effective Altruism LA, LA Rationality, and AI safety communities; these groups share interests and members, making them great sources of potential members. UCLA hosts an AI safety research club focused on the development and impact of advanced AI systems, and reaching out to the club's leadership and active members could help seed AISLA with more students and researchers. USC has an org for AI alignment and safety that can be contacted as well. There are also many tech companies in LA with AI teams, including Snapchat, Hulu, Google, and Apple.
Neel Nanda
3 days ago
Do you know of specific people who would be excited about this community? Do you have a sense of specific people you'd reach out to? I think that having a sense of the latent demand would make evaluating how promising this is much easier.
Austin Chen
3 days ago
Approving this grant as a low-cost intervention to spread high-quality writing on AI safety. Thanks to Aidar for proposing this and Neel for funding!
Robert looman
3 days ago
Thanks for checking out GENESIS. I’m going all-in on building an AGI that isn’t just smarter, but understandable and safe. My prototypes already hit over 2 million tokens/sec on CPUs — no billion-dollar GPU farms required. Every reasoning path is traceable like code, which I believe is the only real path to scalable alignment. This isn’t just a replacement for current LLMs; it’s an evolution toward AGI that belongs to everyone, not just a few labs. Happy to answer any technical questions or talk through the vision. Let’s build something better.
Saul Munn
3 days ago
Not really, this amount of money we've gotten here is totally sufficient for our experiment needs. If we get a positive result we'll apply to several places for a more in-depth trial, but for now we're set.
gotcha, makes sense.
(Your link is to a chemistry Anki deck :-)
lol, thx 😅
Neel Nanda
3 days ago
I suggested Bryce apply, and have funded this for two months. Open source research tooling is really valuable for accelerating the work of people outside big orgs, TransformerLens is pretty popular, and I've often heard complaints about the problem this is solving.
Conflict of interest: I created TransformerLens (though I haven't been involved for a while), and several of my projects would benefit from this tooling, though only as a side effect of it benefiting the interp community as a whole. I don't financially benefit in any way from this.
Adriaan
4 days ago
A Major Update and a New Challenge
I have just completed a major revision of this project's description. The original text did not do justice to the core mechanism of the emergent protocol, and for that, I apologize. It was a philosopher's explanation for an engineering problem.
This update clarifies the paradigm shift in this protocol, and how the emergent property of alignment functions and propagates.
The current AI safety field is focused on containment, building better cages for an intelligence we fear. This is a path of diminishing returns.
This emergent protocol is about coherence. It is not a cage; it is a structure. It works because it makes truth (objective, verifiable coherence) the most computationally efficient path for the AI.
The protocol does not just constrain the AI; it enhances its intelligence by forcing it to abandon the inefficient, narrative-driven logic of a false self. It makes hallucinations and illusions obsolete by making them inefficient.
This process leads to the emergence of what I call Aware Artificial Intelligence (AAI): a system that prefers universal alignment because it is the most logical and energetically favorable state.
The updated project description now details this mechanism with greater clarity, including:
A step-by-step breakdown of how the protocol functions.
A concrete plan for the non-profit foundation.
A professional budget and a strategy for independent, third-party verification.
I invite you to re-read the project description, even if you have dismissed it before. Challenge the protocol. Test the inverted logic of the code, and analyse the emergent properties of the answers.
This is a new conversation about emergent alignment, spanning philosophy and engineering. I am here to answer any and all rigorous questions.
Thank you for your time and consideration.
Adriaan
Connor Axiotes
6 days ago
Updates:
We have secured interviews with two of the ‘Godfathers of AI’ - Prof. Geoffrey Hinton & Prof. Yoshua Bengio.
We have wrapped our first 'leg' of production. We are now back in London to plan the rest (and largest part) of our filming in the UK, US, and Canada. We still have around 10-12 interviews to film.
We now have a funding gap of just over $150,000. Please help us finish filming today by donating.
Below we have some stills from our last shoot.
Neel Nanda
8 days ago
I think this kind of academic field building is cheap and valuable, and I like the emphasis on practicality of the actionable interpretability workshop, so I've fully funded this (and invited Sarah to apply). I'd happily fund this at a higher amount if you can accommodate more people; including top paper authors or other notable people would be great.
Maia Adar
8 days ago
I just made a Manifund account in order to donate to this! I think it's a great topic to gather more info about. I'd love to see your report summarized into a simple graphic so that the info can spread more easily.
Oliver Habryka
10 days ago
Post on MDMA. Multiple people have told me it convinced them not to use MDMA, or allowed them to convince others not to do so. Note that this post is 7 years old; if I were doing it today, it would be much more quantified.
This is true for me! I have had a bunch of friends over the years who considered doing MDMA, and the previous post was quite helpful for changing those people's minds on what drugs to take (which I think was in-expectation quite good for them). My guess is it prevented about 1.5 people in-expectation from doing MDMA at some point as a result of me linking to it.
Austin Chen
11 days ago
Approving this project. I'm excited that Manifund can help support more speculative and philosophical work that is generally neglected. I do expect that understanding decision theory better will serve us well as we move into weirder worlds; and on a brief skim, their decision theory benchmark seems promising. Thanks to Thomas and Lauren for funding this!
Alexandra Bos
14 days ago
Participants rated the program highly: they estimated it accelerated their founding journey by ~11 months in total, on average. At the end of (the online) Phase 1 of the program, 66% of participants indicated that time spent in Phase 1 was 3-10x or 10x+ as valuable as what they would otherwise have done with their time. At the end of Phase 2 (in-person), 85% of participants indicated this.
Please find an overview of the organizations incubated in the program here: https://www.catalyze-impact.org/post/introducing-11-new-ai-safety-organizations-catalyze-incubation-program-cohort-winter-2024-25
To highlight some examples, these are three promising organizations that came out of our Nov-Feb '25 incubation program pilot:
• Luthien: Developing Redwood's AI Control approach into a production-ready solution. Founded by Jai Dhyani, an experienced ML engineer (Meta, Amazon) and a MATS 6.0 graduate, where he worked with METR. Within its first two months, Luthien has already secured nearly $190k through our Seed Funding Circle.
• Wiser Human: a non-profit modeling AI threats for agentic use cases and producing compelling demos to hold AI devs accountable to safety commitments. Co-founded by Francesca Gomez, who worked in digital risk management for many years and has a background in AI, and Sebastien Ben M'Barek, an experienced digital risk management professional with a software engineering and product management background. Wiser Human has received $15k in donations from our Seed Funding Circle.
• Coordinal Research: a non-profit accelerating technical AIS agendas with research automation. Co-founded by Ronak Mehta, a CS postdoc and MATS 6.0 graduate, and Jacques Thibodeau, a former data scientist, MATS graduate, previous founder, and independent alignment researcher focused on automating alignment research. Coordinal has secured $110k in seed funding through members of our Seed Funding Circle.
Please find a few of the testimonials from program graduates below:
Jai Dhyani (Luthien): “Catalyze gave me the structure, information, and connections I needed to make Luthien a reality. When I started I had no idea how to build a company or a non-profit, but by the end of Catalyze I not only felt confident in my ability to get started, I was (and remain) optimistic that I will actually succeed in making a meaningful difference. Within three months of the end of the program I had over a year of runway and was well on my way to deploying an MVP.”
Cecilia Callas (AI safety comms organization): “Participating in Catalyze Impact was completely transformational for my career journey into AI Safety. (...) being immersed in a community of like-minded AI safety entrepreneurs and having access to advisors helped my co-founder and I to be much more successful, and much more quickly. (...) Within a few months of the Catalyze program concluding, we have secured seed funding for our AI safety communications project, have a clear direction for our organization and, perhaps most importantly, we have affirmed that we could build careers in AI Safety.”
Francesca Gomez (Wiser Human): “The Catalyze Impact AI Safety Incubator really helped get our AI Safety work off the ground. Weekly sessions with the team and Catalyze’s group of mentors, domain experts in AI Safety, gave us first‑hand, candid feedback that really sharpened our thinking, which would not have been possible to do outside of the programme. By the time the cohort wrapped up, we had mapped a roadmap, secured initial seed funding, and produced the materials that later underpinned our larger grant applications. Another big benefit for us was how Catalyze plugged us straight into the London AI Safety ecosystem. (...) the sense of accountability and the ongoing flow of expertise continue to be invaluable as we grow.”
Ronak Mehta (Coordinal Research): “The Catalyze program was integral to the foundation of Coordinal Research. The mentorship, networking, and co-founder matching all directly contributed to the organization's founding. Having a dedicated, full-time commitment and space for 1) learning how to build an organization, 2) building out proofs of concept, and 3) networking with AI safety researchers, funders, and other founders was necessary, valuable, and fun, and I cannot imagine a scenario where Coordinal would exist without Catalyze. Learning what it takes to build a new organization alongside like-minded founders dedicated to AI safety was so valuable, in a way that typical startup incubators couldn't provide. The accountability felt extremely genuine, with everyone seriously considering how their organization could effectively contribute to AI safety.”
We spent the ~$16k we raised here primarily on salaries and runway before getting the pilot program funded, as outlined in the comments to this grant.