Lucas Kempe
Make "which tool did the agent pick, and why?" an inspectable artifact instead of a vector lookup.
NOBUTAKA HATTORI
Integration architecture for human-AI coexistence, built in by design rather than retrofitted: combining foundational tooling with novel techniques (no prior art), single researcher.
Vicente Velásquez
We do regular updates on our site! Vinte.app
Nathan Thornhill
An ORCID-gated submission pipeline where a multi-model AI panel plus quality-control layer delivers rigorous peer review without institutional gatekeeping.
Emma Humphrey
$5,000 USD to bring 16 vetted academics and policy leads to NZ's first AI Safety Conference, ensuring national representation and cross-sector collaboration.
Fatika Umar Ibrahim
The first AI safety evaluation benchmark for Nigerian indigenous livestock systems testing whether frontier models are safe to deploy in African food systems.
Atmadeep Ghoshal
Requesting funding for ICML 2026 spotlight position paper on ML safety for combating intimate partner violence
Sean Peters
An early-stage AI safety research group based in Sydney, Australia
Aldan Creo
Putting explainability at the forefront of AI text detection
Sankalp Gilda
Two co-authored workshop papers (LLM reasoning, agentic-AI accountability), presented April 2026 in Rio. Asking partial trip reimbursement.
Karsten Brensing
Limited Legal Personhood as a Reversible Safety Instrument
Matthew A Cator
Funding the open-source launch of a working claim-state system and the local firewall bridge that carries verification before voice into governed agent action.
Alex Kwon
If your reward model is an LLM, you cannot tell whether the policy is gaming the reward or actually getting better. We built a simulator instead.
José Wheeler
Identifying and auditing reasoning circuits in LLMs within Algoverse 2026 using Sparse Autoencoders (SAEs).
Kumari Neha Priya
Urgent funding needed by May 14 for graduate policy training focused on AI governance
Mu Zi
This round of funding will be used primarily for prototype hardening, artifact packaging, runtime evaluation, and preparation for external review.
Developing enforceable architectural constraints, safety mechanisms, and certification criteria to keep advanced AI systems aligned and non-conscious
AI Understanding
Building the first browser-based digital laboratory for interactive AI Safety education and failure-mode discovery.
Sardor Razikov
First quantitative framework for measuring when LLMs surrender independent reasoning under authority pressure
Jonathan Elsworth Eicher