Jess Hines (Fingerprint Content)
Detect polarising story-frames early and build better narratives—fast, practical, adoptable.
Reconstructs longitudinal patient state, identifies causal drivers, and runs simulations to prevent high-severity clinical failures under fragmented data.
Igor Labutin
A tech-infused immersive musical. Experience the future of storytelling where artificial intelligence meets the depths of human emotion.
Jacob Steinhardt
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
Lawrence Wagner
Sara Holt
Short Documentary and Music Video
Mirco Giacobbe
Developing the software infrastructure to make AI systems safe, with formal guarantees
Gergő Gáspár
Help us solve the talent and funding bottleneck for EA and AIS.
Alex Leader
Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs
Ella Wei
A prototype safety engine designed to relieve the growing AI governance bottleneck created by the EU AI Act and global compliance demands.
Chris Canal
Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide
Evžen Wybitul
Joseph E Brown
A constraint-first approach to ensuring non-authoritative, fail-closed behavior in large language models under ambiguity and real-world pressure
Mackenzie Conor James Clark
An open-source framework for detecting and correcting agentic drift using formal metrics and internal control kernels
Anthony Ware
Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
Centre pour la Sécurité de l'IA
Leveraging 12 Nobel signatories to harmonize lab safety thresholds and secure an international agreement during the 2026 diplomatic window.