

Safe AI Germany (SAIGE)

Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks

Jessica P. Wang

Proposal · Grant
Closes March 6th, 2026
$0 raised
$50,000 minimum funding
$285,000 funding goal


Project summary

Safe AI Germany (SAIGE, website: https://safeaigermany.org/ ) addresses an urgent inefficiency in the current landscape: the shortage of people from Germany positioned to positively influence the trajectory of advanced AI development. Germany is the European engine for technical talent: according to the Federal Statistical Office (Destatis), Germany has the highest share of STEM Master's degrees in the EU (35%), well above the EU average of 25%. It also has a world-class engineering sector, roughly 300,000 STEM students per year (source), and more than 110,000 law students (source). Yet global capacity in technical safety and governance remains critically limited. We see a massive structural bottleneck in the local ecosystem: virtually none of this top-percentile talent is funneled into AGI safety. Instead, this hidden reserve of industrial expertise flows almost exclusively into traditional roles (e.g. mechanical engineering, with 1.3 million employees), simply because people lack the context and infrastructure to apply their skills to AGI safety.

Our mission is to build the centralised infrastructure required to bridge this gap. We are moving beyond volatile student initiatives to create a stable national organisation that supports both students and professionals through:

  1. Upskilling: We are scaling AIS Saarland's research incubator nationwide together with its director (who now also serves as Technical Lead for SAIGE), providing coverage for cities that currently lack local hubs. This ensures high-potential students and professionals have a clear path into the field. We also provide the specific context needed to enter the field by organising hackathons (in collaboration with Apart Research) and technical deep-dives (with partners like Apollo Research). These serve as a direct entry point for both professionals and students. Details: https://safeaigermany.org/incubator 

  2. Career advising: For professionals, we have a confirmed collaboration with High Impact Professionals to provide a network and guidance for career transitions. We are currently finalising details with two additional collaborators.

Currently, the path of least resistance for high-potential German talent is to flow into standard industry roles. Our theory of change focuses on expanding the AI safety talent pool by redirecting this flow:

Link to ToC Diagram

Note that "Sufficient funding" is still pending at the time of writing. SAIGE is currently entirely self-funded by its director (me).

We define a successful "AI Safety role" outcome to include any of the following:

  • Employment: Full-time permanent positions, short-term fixed positions, or project-based contractor positions at established labs and organisations (e.g. MATS fellowship);

  • Entrepreneurial roles: founding new AI Safety initiatives or non-profits;

  • Civic & ecosystem contribution: High-impact pro-bono work such as advising policymakers or giving educational talks.

Note: SAIGE is just starting its journey, so although our Theory of Change lists many activities, we need to determine which ones to prioritise first in line with our goal. See the planned activities below for details.

Planned activities

  • Phase I (already on our website): We will focus immediately on connecting scattered local assets into a coherent national pipeline. This includes:

  • Pipeline for mid/senior-career professionals

  • Incubator program for everyone (scheduled around the typical German semester dates, which differ from those in most other countries; professionals are welcome and encouraged to apply and join)

  • Basic infrastructure support for local groups

  • Low-budget online events to gain more outreach traction (e.g. a confirmed first launch event with Joshua Landes on March 4th), such as talks on technical and governance topics, and talks by referred contacts from HIP on "how I transitioned into AI Safety in Germany as a mid/senior-career professional"

  • Phase II: Scaling & institutionalisation, contingent on funding. This includes events such as:

  • SAIGE Day, i.e. an EAG-like event but for AI Safety, focused on the German ecosystem

  • In-person events/retreats, incl. a national retreat for local leadership every 6 months to exchange feedback with each other and with SAIGE; 2 in-person hackathons; a 40-50 person career workshop with Successif

  • Establishing a formal legal entity (e.V.) and deploying a centralised tech stack to relieve local organisers of administrative burdens


    Depending on capacity, Phase II could also include events that would likely add to our outreach but are not currently on our priority list: an introductory course, partnered with AIS Collab, timed to the German semester dates, and a weekend-intensive program for career professionals that better suits their schedules and capacity for time commitment. These are not listed in Phase I because the incubator program already aims to include an introductory course, and we do not yet know the quantitative impact of such a program. If we see positive results and receive sufficient funding, we will consider these for Phase II.

Who is on your team? What's your track record on similar projects?

Core Team

Jessica Pu Wang, Director 

  • Experience: Educational background in mathematics. Worked at Epoch AI on the FrontierMath project, serving as Outreach Coordinator sourcing talent for Tier 4, and later co-organised the 2025 FrontierMath Symposium. Top 9 global contributor to Humanity's Last Exam. Previously worked as a Global Operations Analyst at Calastone, the largest global funds network. Served as the sole official photographer at the International Mathematical Olympiad (IMO), an event with 1300+ attendees. Also President of the Durham University Maths Society and an Ambassador for the Institute of Physics.

  • Responsibilities: Oversees the overall progress, design, and execution of activities. Communicates with existing and potential collaborators to ensure activities are carried out smoothly. Also responsible for outreach and fundraising.


Tzu Kit Chan, Executive Advisor 

  • Experience: Educational background in philosophy, comparative literature, and cognitive science. Has been actively involved in the AGI preparedness community for 3+ years: did operations for MATS, co-founded Caltech AI Alignment, ran Stanford AI Alignment, and serves as a board advisor for Berkeley AI Safety as well as 25 other top universities across the world. He is also the youngest advisor to Malaysia's National AI Office (NAIO) AI Safety and Malaysian Technical Standards Forum (MTSFB) working groups.

  • Responsibilities: Advises the core team on organisational strategy and program architecture in a more high-touch role. Leverages his experience building MATS and Atlas Computing to help design effective fellowship models and programs. Provides executive mentorship to the Director and facilitates strategic connections to the international AGI preparedness research ecosystem.


Manon Kempermann, Technical Program Lead

  • Experience: Educational background in data science and artificial intelligence. Founder of AI Safety Saarland. Currently writing a thesis at the Max Planck Institute for Software Systems on red-teaming for misalignment in AI agents. Also a Pathfinder mentor at Kairos. Organised AI Safety events, including a talk with Anthropic with 300+ attendees.

  • Responsibilities: Works with the Director on the nationwide rollout of the Interdisciplinary Research Incubator model, adapting the successful AIS Saarland framework for a broader German context. Helps with the acquisition of technical mentors across Germany. Oversees the strategic pairing of technical mentors with participants to maximise research output.


Leadership advisors

The following individuals have provided regular, valuable, and critical feedback on SAIGE's planning. They will continue to hold monthly or bi-monthly meetings with the director to ensure our goals align with the broader AI Safety ecosystem. Their involvement ensures that SAIGE's actions are continuously shaped by diverse expert perspectives.

Gergő Gáspár

  • Experience: Educational background in cognitive psychology. Director of Effective Altruism (EA) UK. Previously director of the European Network for AI Safety, after having founded AI Safety Hungary. Also founded EA Hungary, which has supported 300+ students and professionals in achieving their career goals.

Marcel Steimke

  • Experience: Educational background in mechanical engineering. Director of EA Switzerland. Previously a key organiser for EA Aachen. 7+ years of active fieldbuilding experience.

Callum Hinchcliffe 

  • Experience: Educational background in philosophy. Worked as a program manager at Arcadia Impact. Ran and project-managed the Orion AI Governance Initiative for the 2024/25 academic year. Previously an operations and community manager at Pivotal Research, and a Facilitator in AI Governance for both BlueDot Impact and Safe AI London.

Nico Hillbrand

  • Experience: Educational background in computer science. Founder of AI Safety Aachen and co-organiser of the EU AI Safety Forum. Also an organiser for EA Aachen. Recipient of the Pathfinder support grant and previously of Open Philanthropy's (now Coefficient Giving) community builder grant.

What are the most likely causes and outcomes if this project fails?

1. Founder Burnout

  • Cause: The organisation currently relies heavily on the high-intensity output of the Director (me). If I burn out or have to step back for health reasons, momentum could collapse before the institution is self-sustaining.

  • Outcome: The project stalls and accumulated momentum is lost.

  • Mitigation: The budget includes salaries for a tech lead, governance lead, communications manager, and an ops specialist to distribute the workload.

2. Low Conversion Rate

  • Cause: We successfully run the Incubator, but participants are not quite "top tier" enough to get hired by major AI Safety labs or admitted to MATS.

  • Outcome: Participants eventually return to standard capabilities-focused tech jobs. The counterfactual impact is near zero (money wasted).

  • Mitigation: We will implement a rigorous filter for the Incubator (both ensuring technical capabilities and interviewing on value alignment) to ensure we only accept high-potential, aligned candidates.

3. Marketing Failure

  • Cause: In Germany, "safety" ("Sicherheit") usually means industrial safety (ISO 26262, TÜV) or data privacy (GDPR). "AI Safety" may therefore land poorly with the pragmatic German engineering culture: there is a risk that technical talent misinterprets "AI Safety" as boring compliance work or standard cybersecurity, rather than alignment and control of advanced systems.

  • Outcome: Talent is wasted; we fail to redirect people to AI Safety because their starting motivation is misaligned.

  • Mitigation: We need to draw people's attention to AGI safety. We will frame problems in the language of reliability engineering and formal verification (i.e. concepts that resonate with the German "TÜV mindset") while pointing toward existential security.

The Ask and Related Goals

Minimum Viable Launch ($50,000):

  • Covers: Legal setup, website ops, and a part-time survival salary for me for 4 months; launching the first incubator cohort.

Full Operational Scale ($285,000):

  • Covers: Full-time salaries for the Director and Program Lead for 12 months, the full compute budget, and nationwide in-person events.

I would be very happy to be contacted and to share more details about this project if anyone has feedback or questions!
Email: info@safeaigermany.org
