LEE SEULKI
Networking and sharing the AIO Integrity Index (11,340 scenarios) with global policymakers to address "Integrity Hallucination" in LLMs.
Connacher Murphy
A flexible simulation environment for assessing strategic and persuasive capabilities, benchmarking, and agent development, inspired by reality TV competitions.
Griffin Walters
AI completes tasks to 100% by default. Our middleware makes that impossible: human judgment is required at critical decision points, enforced by schema.
Jessica P. Wang
Germany’s talents are critical to the global effort of reducing catastrophic risks brought by artificial intelligence.
Cameron Tice
Karen A. Brown
Filmmakers, musicians, and comedians team up with AI safety researchers to transform technical concepts into compelling viral shorts, making critical risks consumable.
AISA
Translating in-person convenings into measurable outcomes
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice. Help me do the same for AI and make this standard :)
Warren Johnson
Novel safety failure modes discovered across 7 LLM providers with 35,000+ controlled inference trials. Targeting NeurIPS 2026.
Remmelt Ellen
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
Matthew Farr
I self-funded research into a new threat model. It is demonstrating impact (accepted at multiple venues, added to BlueDot's curriculum).
Boyd Kane
by buying gift cards for the game and handing them out at the OpenAI offices
Jacob Steinhardt
Krishna Patel
Expanding proven isolation techniques to high-risk capability domains in Mixture-of-Experts models
Tom Maltby
A Three-Month, Falsification-First Evaluation of CREATE
Mercy Kyalo
Operational costs for AISEA
Lawrence Wagner
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Cefiyana
Developing an Edge-AI framework to reduce response latency to <0.6s, mitigating user cognitive stress and establishing "Digital Pharmacotherapy" standards.