Oliver Klingefjord
Develop an LLM-based coordinator and test it against consumer spending with 200 people.
Jesse Hoogland
Addressing Immediate AI Safety Concerns through DevInterp
Brian Tan
~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of our AI Interpretability Fellowship in Manila
Adam Shai
Fund a new research agenda, based on computational mechanics, bridging mechanism and behavior to develop a rigorous science of AI systems and capabilities.
Lovis Heindrich
Fazl Barez
Unlearning, AI Safety
Garrett Baker
Francis Dierick
Online platform where AIs and humans race to solve puzzles.
PIBBSS
Fund unique approaches to research, field diversification, and the scouting of novel ideas by experienced researchers, supported by the PIBBSS research team
Max Kaufmann
Tianyi (Alex) Qiu
Early exploration, agenda-setting, technical infrastructure, and early community building
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Dan Hendrycks
Tom McGrath
Find the best SAE training settings we can, then scale across models
Vishnu Muthyala
Glen M. Taggart
By rapid iteration on possible alternative architectures & training techniques
Florent Berthet
French center for AI safety
Hyams
Request for retroactive funding
Matthew Farr
Stipend to upskill under, and collaborate with, Sahil K and Topos for 4-6 months, seeking to obtain teleological DAGs as the dual of causal DAGs
Charbel-Raphael Segerie