Aidan Bridge
Building a physics-governed memory system that makes long-term AI behavior auditable and resistant to deceptive alignment.
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
Jess Hines (Fingerprint Content)
Detect polarising story-frames early and build better narratives: fast, practical, adoptable.
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
Lawrence Wagner
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Sara Holt
Short Documentary and Music Video
Avinash A
Formalizing the "Safety Ceiling": An Agda-Verified Impossibility Theorem for AI Alignment
Melina Moreira Campos Lima
Assessing the Climate Potential of Catering Systems in Public Schools and Hospitals
Preeti Ravindra
AI Safety Camp 2026 project: Bidirectional failure modes between security and safety
Gergő Gáspár
Help us solve the talent and funding bottleneck for EA and AIS.
Alex Leader
Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs
Ella Wei
A prototype safety engine designed to relieve the growing AI governance bottleneck created by the EU AI Act and global compliance demands.
Mackenzie Conor James Clark
An open-source framework for detecting and correcting agentic drift using formal metrics and internal control kernels
Jade Master
Developing correct-by-construction world models for verification of frontier AI
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
Rufo Guerreschi
Persuading a critical mass of key potential influencers of Trump's AI policy to champion a bold, timely and proper US-China-led global AI treaty
Anthony Ware
Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.