Finn Metz
about 6 hours ago
Seems valuable. Lots of talent in Germany and few ways to get them engaged thus far.
Martin Milbradt
about 8 hours ago
Background: I'm a community builder in Berlin contracting with AI Safety organizations and had a call with Jessica about SAIGE.
Various local groups exist across Germany that could benefit from a national entity. E.g., a German AI Safety day is hard to realize without an organization behind it.
Beyond this proposal, I'd also like to see German-language resources and outreach.
A national organization that understands local idiosyncrasies and opportunities could also make the AI Safety pipeline in Germany more efficient.
Joschka Braun
about 17 hours ago
SAIGE looks like a promising way to improve Germany’s AI safety talent pipeline. After talking with Jessica, I’m optimistic about the team and the overall approach.
Johannes C. Mayer
1 day ago
Having read most of the proposal, it seems like you don't have a model of how to make people good at alignment research.
You write things like "deep technical engagement". I expect you mean studying existing literature.
I expect this won't work, or at least not reliably. Part of what makes the alignment problem hard is that we don't yet have a good model of the problem. To make progress on alignment, you need the ability to notice your confusions, to think through them, and to not give up until you have achieved some level of clarity.
You need to be able to handle situations where there is no obvious next step. You need to know how to pick up the problem and look at it from different angles until after some significant effort of analysis you actually manage to make progress, instead of dropping the ball early.
When I have tried to teach people how to do alignment research, the main problem I run into is that I don't manage to get them to seriously try. That is, to seriously try to solve the actually hard problems.
Either they get distracted by some cool, tractable, but ultimately inconsequential problem, or they run off to read the Sequences, all of Vanessa's work, study math, etc.
Of course I'm not saying that reading the Sequences, learning math, or reading other people's work is intrinsically bad. It's bad here because it's used as an escape mechanism. It's easier to study linear algebra than to try to make progress on alignment. Learning linear algebra might be hard, but at least the path is clear.
There are probably many more important skills I didn't list that are necessary to become an effective alignment researcher.
The problem: your proposal doesn't try to point at this set of skills at all. I expect you're not even going to try to teach this skill-set, because you can't, because you don't have a model of what these skills are or how to teach them.
Now, all that said, all else equal, I expect this project is good to do. It's just that I would be much more excited about it if you could lay out a clear model of what kind of mental procedures are required to make progress on alignment, and how to train these procedures effectively.
This is a hard problem, and I wish more people would think seriously about it. Especially the people who run events like this, meaning any event with the goal of producing capable AI alignment researchers.
The only person I know who seriously thought about this and then tried to implement his model, on a semi-large scale, is John Wentworth. I think he got a lot right in his MATS program stream, but it also feels like he only tried to teach a fraction of the skills necessary.
Piotr Zaborszczyk
1 day ago
I see that the biggest cost would be the venue + food. In case you don't raise the full 179k, I wonder: would it be possible to organize an MVP / quick test of the idea in a cheaper venue? The cheapest would be the EA Hotel / Pause House, but idk how well it would suit your purposes.
Another idea would be to look for a sponsor who would host you in a big venue for free or cheap. Maybe someone EA-related, or maybe just someone with a history of hosting events for NGOs. E.g., I know a lovely privately owned venue near Cracow, where I went to a Jacek Kaczmarski festival twice (organized by the NGO "Kaczmarski Underground"). It was lovely in the summer, and you can swim & kayak there. https://www.google.com/maps/place/Gospodarstwo+Rybackie+Brze%C5%BAnica+Marcin+Orlanka/@49.9698821,19.6442588,662m/
Just a few quick-and-dirty ideas from me, not well thought out. Good luck with the project! :)
Rufo Guerreschi
1 day ago
Three major advances since October.
Strategic Memo v2.6 published (Dec 30, 2025). We completed and published our 356-page Strategic Memo — synthesizing 667+ sources with deep persuasion profiles of 14+ key influencers of Trump's AI policy (Vance, Bannon, Altman, Musk, Thiel, Amodei, Hassabis, Pope Leo XIV, Gabbard, Suleyman, and others). Each profile maps their interests, philosophy, AI predictions, and tailored persuasion strategies. It also includes detailed treaty-making frameworks, enforcement mechanisms designed to prevent both ASI and authoritarianism (pp. 124-136), and convergence scenarios showing how a critical mass of influencers could align. This isn't a policy white paper — it's an operational playbook for the most consequential persuasion campaign of our time.
1st US Persuasion Tour completed (Oct 2025). Results exceeded all projections — 85+ contacts (vs. 15-20 projected): 23 AI lab officials at OpenAI, Anthropic, and DeepMind HQs in the Bay Area (including senior advisors to two of our target influencers); 18 national security establishment engagements in DC; and direct introducer pathways to 2 of our 10 primary targets. Direct/Open Letters hand-delivered to staff at three major AI lab headquarters. Cost: ~$180 per high-value meeting. Full details: 2025 Achievements
Coalition expanded and site relaunched. Coalition grew to 100+ members, advisors, and supporters across 10 NGO partners, with contributors spanning the UN, NSA, WEF, Yale, Princeton, and top AI labs. We relaunched cbpai.org with dedicated pages for concerned citizens and AI safety experts, a comprehensive blog with 16 posts, and Direct/Open Letters to each key influencer.
Recent blog posts address fast-moving developments: The Pentagon Just Declared Wartime AI Mobilization — Here's Why That Makes a Treaty More Likely, Not Less (Jan 14) and January 1946, January 2026: The Paradox of Pragmatic US Presidents and Existential Treaties (Jan 16).
All of this on approximately $7,500/month and 2,100+ volunteer hours to date.
We're targeting the April 2026 window — Trump's first anticipated summit (of four) with Xi Jinping — as the critical moment for advancing a US-China-led AI treaty. Full roadmap: 2026 Roadmap
February: Strategic Memo v3.0 publication with updated influencer analyses, new Direct/Open Letters — including a dedicated letter to Pope Leo XIV timed to his emerging AI ethics leadership. Second-wave outreach to high-priority introducers. New Delhi AI Action Summit engagement (Feb 19-20) for direct contact with attending AI lab CEOs and international networking.
March–April (the critical window): Execute 2nd Persuasion Tour across Washington DC, Mar-a-Lago area, and Rome/Vatican. Precision-targeted engagement with the individuals who shape Trump's AI thinking. Rome/Vatican convenings could catalyze the humanist AI alliance — bringing together figures concerned about AI's threat to human dignity with Vatican moral authority. Strategic Memo v3.5 published and positioned for the summit window.
May–December: Post-summit reassessment and strategy recalibration. Sustained campaign across all four hubs. Singapore and other international venues as opportunities emerge.
2026 Targets: 150+ introducer engagements across four hubs (Bay Area, DC, Rome, Mar-a-Lago), 30+ direct engagements with influencers or their senior staff, 5-8 substantive meetings with influencers themselves.
We're currently seeking $10–30K in bridge funding immediately and $100K–$400K by end of February to hire 2–3 staff and execute the tours at the required tempo.
Three things would be transformative:
FUNDING. We need $100K–$400K to hire 2-3 staff and execute simultaneous operations across four hubs through the April summit window. We also need $10–20K bridge funding to maintain operations while closing larger commitments. The strategic arsenal is built; execution capacity is the sole bottleneck. With 2-3 hires, we can leverage AI tools to transform our 356-page treasure trove into personalized outreach at scale — easily 10-50x our impact. Details: Donate
INTRODUCTIONS TO INFLUENCERS. If you have trusted access to any of our targets — Vance, Bannon, Altman, Musk, Thiel, Pope Leo XIV, Gabbard, Amodei, Hassabis, Suleyman, Tucker Carlson, Joe Rogan — we have ready-to-deploy Direct/Open Letters calibrated to each person's worldview, interests, and AI predictions. Warm introductions convert at dramatically higher rates than cold outreach.
CONTRIBUTING TO MEMO v3.0 (due mid-February): We need deep knowledge of specific influencers' networks, expertise in treaty enforcement or diplomatic processes, and writing capacity for final drafts. If you have relevant expertise or connections: cbpai@trustlesscomputing.org
Join the coalition | Case for AI Safety Experts | Case for Concerned Citizens
Robert Kralisch
2 days ago
Fully agree with Jonathan.
Writing as an organizer of AI Safety Camp who has talked with Jessica: this is the most promising fieldbuilding project for AI Safety in Germany that I have seen to date.
I absolutely buy their assessment of the untapped STEM talent pool that Germany has to offer technical AI Safety work.
Since German culture disincentivizes risk-taking in one's career, it is all the more important to have a strong, central organisation capable of connecting national talent into a community and offering a clear view of career prospects in the field.
It also strikes me as an excellent platform for outreach about AI Safety topics to the general public in Germany.
After looking through SAIGE's plans and their theory of change, talking to Jessica, and reading the other comments here, I strongly recommend this project for funding.
aditya adiga
2 days ago
I think this work has made an important contribution to my picture of the interpretability landscape and to how I think about this problem, and I would highly recommend funding it.
David Williams-King
2 days ago
Geodesic has an outsized impact on the Cambridge AI safety community through mentoring many MARS fellows and its connections with ERA and Meridian. Their work on pretraining in particular is unique and has high impact potential. I believe that supporting their work is a good use of funds. Disclaimer: I have collaborated with Geodesic researchers in the past.
Jonathan Mannhart
2 days ago
In my opinion we very much need a better network and more effective coordination in Germany! And this is likely the best German AI Safety fieldbuilding project right now, and one of the best in general that I know of.
Writing as a local group organizer: SAIGE would provide what the ecosystem needs so that we can address this problem (and approach people) at scale. Running a local group is good, but very often there's a gap between a reading group and actually connecting someone to the right people and changing their career. (And local university groups also just don't appeal to everyone.)
Germany/DACH has enough raw talent (top-tier engineering & STEM people) and political relevance (EU AI Act), but often lacks the execution and ecosystem coordination to channel this into AI Safety. Local groups can only do so much, and having a wider network is absurdly important & extremely high EV.
(Jessica is also really great and I fully support her as much as I can. Have not heard a single not-great thing about her.)
Tomasz Kiliańczyk
3 days ago
@Piotr This is a scenario analysis, as the title clearly indicates. The analysis consists of extrapolating trends and describing them in the form of a strategic retrospective based on certain assumptions.
For the purposes of the report, an operational rather than a philosophical definition was adopted:
"Emergent functional awareness is the ability of a system to modify its own
goals and mode of communication in a way that indicates it can distinguish the consequences
of its own actions, while concealing this process from the observer."
Anders Edson
3 days ago
I am bullish on increased field-building efforts within Germany. In my limited interactions with Jessica, she has seemed conscientious and agentic, and I know Tzu more personally as someone who is very high-energy, ambitious, and conscientious!
Austin Chen
4 days ago
Hey, can you share which individuals or orgs are on this team (e.g. by updating your profile & project description)? We generally ask Manifund grantees to be identified in public, unless there are especially compelling reasons for pseudonymity.
Jessica P. Wang
4 days ago
The whole team is very capable and thank you for joining the team of advisors!!
Haakon Huynh
4 days ago
@Austin flagging this for admin approval, as we're reimbursing one of the speakers from a convening. Let us know if there's anything missing from our end. Thanks