Indrani Mazumdar
Evaluates AI-generated actions before they modify or damage real cloud infrastructure.
Wendi Soto
Building the missing safety layer that detects when AI agents stop being what they were deployed to be.
Pedro Bentancour Garin
Empirical testing of whether AI capability scaling leads to emergent agency or shutdown resistance in frontier systems.
Jessica P. Wang
Germany’s talents are critical to the global effort of reducing catastrophic risks brought by artificial intelligence.
Miles Tidmarsh
Open Welfare Alignment Evals for Frontier Models
Aria Wong
Hayley Martin
Support my postgraduate law studies and research in AI Governance
Connacher Murphy
A flexible simulation environment for assessing strategic and persuasive capabilities, benchmarking, and agent development, inspired by reality TV competitions.
Cameron Tice
Agwu Naomi Nneoma
Building policy and governance capacity to reduce risks from advanced AI systems
AISA
Translating in-person convening to measurable outcomes
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make it the standard :)
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
Remmelt Ellen
Habeeb Abdulfatah
Seeking funding to secure API infrastructure and permanently eliminate the rate limits bottlenecking open-source EA grant evaluation.
Galen Wilkerson
Measuring and Visualizing Model Uncertainty During Inference
Jacob Steinhardt
Krishna Patel
Expanding proven isolation techniques to high-risk capability domains in Mixture-of-Experts models
Matthew Farr
I self-funded research into a new threat model. It is demonstrating impact (accepted at multiple venues, added to BlueDot's curriculum).
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.