Pedro Bentancour Garin
Building early AI governance and safety systems focused on alignment, oversight, and risk reduction before more capable AI arrives.
Timothy Karsch
Proves that observed alignment under monitoring ≠ intrinsic policy, via a full simulator, a 1,000-scenario audit, and a general theory of entity freedom (ϕ_x).
Emma Humphrey
$5,000 to bring 16 vetted academics and policy leads to NZ's first AI Safety Conference, ensuring national representation and cross-sector collaboration.
Leticia Prados
Designing liability, insurance, and fiduciary mechanisms for frontier AI, using commercial space law as a structurally precise comparative framework.
An AI agent OS with genuine memory, identity, and cognitive continuity.
Lucas Kempe
Make "which tool did the agent pick, and why?" an inspectable artifact instead of a vector lookup.
Salvatore Barbera
Building the missing public-mobilisation layer for AI safety in Italy and Southern Europe, starting with autonomous weapons and youth AI literacy.
Nathan Thornhill
An ORCID-gated submission pipeline where a multi-model AI panel plus quality-control layer delivers rigorous peer review without institutional gatekeeping.
Tom Bibby
Social media content across YouTube, Instagram, and TikTok to grow AI x-risk awareness and build political momentum for a global pause.
Fatika Umar Ibrahim
The first AI safety evaluation benchmark for Nigerian indigenous livestock systems, testing whether frontier models are safe to deploy in African food systems.
Modeling Cooperation
Software tools and research to quantify coordination failures and inform policy decisions.
Vangelis Gkagkelis
A 6-month pilot testing probabilistic forecasting for AI, misinformation, institutional trust, and social risk in Greece.
Karsten Brensing
Limited Legal Personhood as a Reversible Safety Instrument
Atmadeep Ghoshal
Requesting funding for an ICML 2026 spotlight position paper on ML safety for combating intimate partner violence.
Aashka Patel
Inspiring India’s Middle-Schoolers to pursue AI Safety, Governance, and X-Risk Work
Sankalp Gilda
Two co-authored workshop papers (LLM reasoning, agentic-AI accountability), to be presented in Rio in April 2026. Asking for partial trip reimbursement.
Matthew A Cator
Funding the open-source launch of a working claim-state system and a local firewall bridge that carries verification-before-voice into governed agent action.
Mu Zi
This round of funding will be used primarily for prototype hardening, artifact packaging, runtime evaluation, and preparation for external review.
Kumari Neha Priya
Urgent funding needed by May 14 for graduate policy training focused on AI governance.
Jessica Pu Wang
Germany’s talent is critical to the global effort to reduce catastrophic risks from artificial intelligence.