Abdul Karim Moro
Cryptographic identity, tamper-evident audit trails, policy enforcement, and a kill switch for AI agents: the MIT-licensed standard the EU AI Act demands. Nobody else has this.
Jessica P. Wang
Germany’s talents are critical to the global effort to reduce catastrophic risks posed by artificial intelligence.
Galen Wilkerson
Measuring and Visualizing Model Uncertainty During Inference
AISA
Translating in-person convening into measurable outcomes
Karen A. Brown
Filmmakers, musicians, and comedians team up with AI safety researchers to transform technical concepts into compelling viral shorts that make critical risks digestible.
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this the standard. :)
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
Tony Rost
Resources for journalists, clinicians, and educators before AI consciousness debates calcify.
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Lawrence Wagner
Gergő Gáspár
Help us solve the talent and funding bottlenecks for EA and AIS.
Larry Arnold
A modular red-teaming and risk-evaluation framework for LLM safety
Jade Master
Developing correct-by-construction world models for verification of frontier AI
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Rufo Guerreschi
Persuading a critical mass of key potential influencers of Trump's AI policy to champion a bold, timely, and proper US-China-led global AI treaty
Jess Hines (Fingerprint Content)
Detect polarising story-frames early and build better narratives—fast, practical, adoptable.
Preeti Ravindra
AI Safety Camp 2026 project: Bidirectional failure modes between security and safety