Shep Riley
Running an EA and AIS group, connecting participants to high-impact orgs
Bryce Meyer
Sarah Wiegreffe
https://actionable-interpretability.github.io/
Chi Nguyen
Making sure AI systems don't mess up acausal interactions
Kristina Vaia
The official AI safety community in Los Angeles
Apart Research
Funding ends June 2025: Urgent support for proven AI safety pipeline converting technical talent from 26+ countries into published contributors
Jaeson Booker
Creating a fund exclusively focused on supporting AI Safety Research
Igor Ivanov
Asterisk Magazine
Tamar Rott Shaham
Jim Maar
Reproducing the Claude poetry planning results quantitatively
Connor Axiotes
Geoffrey Hinton & Yoshua Bengio Interviews Secured, Funding Still Needed
Centre pour la Sécurité de l'IA
4M+ views on AI safety: Help us replicate and scale this success with more creators
Mox
For AI safety, AI labs, EA charities & startups
Guy
Out of This Box: The Last Musical (Written by Humans)
Ronak Mehta
Funding for a new nonprofit organization focused on accelerating and automating safety work.
Florian Dietz
Revealing Latent Knowledge Through Personality-Shift Tokens
Robert Looman
Building a transparent, symbolic AGI that runs millions of tokens/sec on CPUs, making safe, explainable AI accessible to everyone.
Yuanyuan Sun
Building bridges between Western and Chinese AI governance efforts to address global AI safety challenges.
Miles Tidmarsh
Enabling Compassion in Machine Learning (CaML) to develop methods and data to shift future AI values