Aashka Patel
Redirecting India’s Middle‑Schoolers into AI Safety, Governance, and X‑Risk Work
Arya Jakkli
Samantha Joan Ackary
Discovery Phase for a Local Priorities Research Project for the Philippines
Nicholas Kruus
LLMs could automate intelligence analysis. I wrote the first paper on governing this; $5k buys 2mo to revise it and scope an org research branch
Lawrence Wagner
A benchmark for studying how failures spread across multi-agent AI systems and whether they can be detected and interrupted in time.
Jonathan Elsworth Eicher
Linh Le
Wasim Gadwal
Observability and interpretability toolkit for world models in AI safety and mechanistic interpretability research.
Peter van Hardenberg
sung hun kwag
An open-source safety pilot for detecting metric gaming, pseudo-improvement, and oversight evasion
Rishub Jain
Euan McLean
Salary & support for 1 year of leadership of Integral Altruism - a movement bridging EA with wisdom
Mox
An incubator & community space in SF for doers of good and masters of craft
Johan Fredrikzon
Designing a Project Funding Proposal
Benjamin Hause
Pedro Bentancour Garin
Runtime safety, oversight, rollback, and control infrastructure for advanced AI in real-world, high-consequence environments.
Matei-Alexandru Anghel
A Safety Framework for Evaluating AI Humanity Alignment Through Progressive Escalation and Scope Creep
Suki Krishna
Investigating how LLMs behave in multi-agent environments, particularly how contextual framing and strategic advice can systematically manipulate coordination outcomes
Pu Wang (Jessica)
Germany’s talent is critical to the global effort to reduce catastrophic risks posed by artificial intelligence.
Luca Parodi
A short pre-design investigation into a competition format for judgment, strategy, and uncertainty.