PIBBSS
Fund unique approaches to research, field diversification, and scouting of novel ideas by experienced researchers, supported by the PIBBSS research team
Grace Braithwaite
A Cambridge Biosecurity Hub and Cambridge Infectious Diseases Symposium on Avoiding Worst-Case Scenarios
Tianyi (Alex) Qiu
Early exploration, agenda-setting, technical infrastructure, and early community building
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Michael Dello-Iacovo
Apart Research
Incubate AI safety research and develop the next generation of global AI safety talent via research sprints and research fellowships
Nuño Sempere
ALERT: Active Longtermist Emergency Response Team
Luan Rafael Marques de Oliveira
Support to translate BlueDot Impact’s AI alignment curriculum into Brazilian Portuguese, to be used in university study groups and an online course
Joel Becker
Boosting advocacy for investment in and deployment of technologies for improving indoor air quality
Cadenza Labs
We're a team of SERI-MATS alumni working on interpretability, seeking funding to continue our research after our LTFF grant ended.
David Federico Rivadeneira
Florent Berthet
French center for AI safety
Thane Ruthenis
Louis S. Berman
AI-Risk Education for Politicians
Aaron Maiwald
Emily Kerr-Finell
An effective, replicable, self-sustaining model for sustainable economic success for migrants via microbusiness
Jonathan Stray
The Prosocial Ranking Challenge is testing novel LLM-based algorithms on real social media platforms. We have 8 winners but can only fund 3.
Paul Mikov
Seed funding for the initial on-the-ground operational presence of the Bonsai Corp, a start-up NGO aiming to safeguard international peace and security from risks of AI
Johan Fredrikzon
Karpov
I plan to investigate what realistic RL training conditions might lead to LLMs developing steganographic capabilities.