Pedro Bentancour Garin
Runtime safety, oversight, rollback, and control infrastructure for advanced AI in real-world, high-consequence environments.
Euan McLean
Salary & support for 1 year of leadership of Integral Altruism - a movement combining EA with wisdom
Mox
An incubator & community space in SF for doers of good and masters of craft
AI Understanding
Matei-Alexandru Anghel
A Safety Framework for Evaluating AI-Humanity Alignment Through Progressive Escalation and Scope Creep
Jessica P. Wang
Germany’s talents are critical to the global effort to reduce the catastrophic risks posed by artificial intelligence.
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this standard :)
AISA
Translating in-person convenings into measurable outcomes
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Dominique Gian Leonardo
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Lawrence Wagner
Gergő Gáspár
Help us solve the talent and funding bottlenecks for EA and AIS.
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
Jade Master
Developing correct-by-construction world models for verification of frontier AI
Rufo Guerreschi
Persuading a critical mass of key potential influencers of Trump's AI policy to champion a bold, timely and proper US-China-led global AI treaty
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Galen Wilkerson
Measuring and Visualizing Model Uncertainty During Inference