Julian Guidote
Building structured datasets of AI self-report through multi-model consensus
Mahmud Omar
An open platform to stress-test how LLMs handle bias, pressure points, and clinical decisions. Built on peer-reviewed, real-world evidence.
aya samadzelkava
LLMs scale language, not method. HP turns hypothesis-driven papers into machine-readable maps of variables, controls, stats, and findings for researchers & AI.
Adam Boon
An executable reasoning quality framework that checks whether AI-generated arguments are logically sound — not just factually accurate. Live at usesophia.app.
Lindsay Langenhoven
Help cover the costs of creating an in-depth article about the impact of mass biometric surveillance in the age of AI.
Pedro Bentancour Garin
Empirical testing of whether AI capability scaling leads to emergent agency or shutdown resistance in frontier systems.
Jessica P. Wang
Germany’s talents are critical to the global effort to reduce catastrophic risks from artificial intelligence.
Gergely Máté
An Interactive Tool for Navigating AI Career Risk
Hayley Martin
Support my postgraduate law studies and research in AI Governance
AISA
Translating in-person convening to measurable outcomes
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this a standard :)
Remmelt Ellen
Haakon Huynh
Agwu Naomi Nneoma
Building policy and governance capacity to reduce risks from advanced AI systems
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
Jacob Steinhardt
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Lawrence Wagner
Mirco Giacobbe
Developing the software infrastructure to make AI systems safe, with formal guarantees
Gergő Gáspár
Help us solve the talent and funding bottleneck for EA and AIS.