You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.
Seed/bridge funding for Geodesic Research, a UK-based, not-for-profit AI safety organization focused on data-intensive safety interventions, largely at the pretraining stage, including thorough follow-up to our prior work on alignment pretraining.
Geodesic is building the field of alignment pretraining: conceptually simple, scalable interventions applied during model training rather than relying solely on post-training techniques.
We build on the premise that post-training alignment is useful but incomplete: safety training degrades under fine-tuning and breaks down in agentic contexts, and if misaligned goals are inherited early, models could fake alignment through post-training. Creating safe and aligned models likely requires interventions across the entire training stack, not just post-training.
Our inaugural paper demonstrates that curating pretraining data can drastically improve alignment even after post-training, without requiring new training techniques or harming capabilities. Researchers from OpenAI, Google DeepMind, and UK AISI have expressed excitement about this direction.
As METR is known for evals, Redwood for control, and Apollo for scheming, we aim to become the safety organization responsible for securing pretraining.
This $450k provides six months of runway while we confirm funding from larger institutional donors.
Core team salaries (~$330k): Transition salaries for our three researchers, allowing Founder and Co-Director Puria Radmard to pause his PhD and join full-time.
Operations hire (~$120k): Bringing on Alexandra Narin (co-founder, UK AI Forum) to manage our restructuring, recruiting, and contracting.
We have ~$500k worth of H100 compute from UK AISI for Q1 of 2026. Staffing, not compute, is our bottleneck.
Puria Radmard (Founder & Co-Director) — a PhD student in theoretical neuroscience at the University of Cambridge. He led Geodesic’s early work on steganography. Previously, Puria has worked as a machine learning engineer at raft.ai and as a private equity quantitative strategist at Goldman Sachs.
Cameron Tice (Founder & Co-Director) — was a Marshall Scholar at the University of Cambridge, where he recently completed his MPhil on automated research for computational psychiatry. Previously, Cameron was the lead author of Noise Injection for Sandbagging Detection and a Research Manager for the ERA: AI Fellowship.
Kyle O'Brien (Founding Member of Technical Staff) — joined Geodesic through the ERA fellowship. Kyle leads our alignment pretraining research agenda and has developed strong relationships with UK AISI through his previous research on Deep Ignorance. Before joining Geodesic, Kyle was at EleutherAI and Microsoft.
Geodesic was founded July 2025 and is fiscally sponsored by Meridian Impact CIC in Cambridge. We are the only organization outside the frontier labs positioned to openly pursue pretraining safety research at scale.
There are two likely failure modes:
Our promising initial results do not scale to larger models and more realistic evaluations.
We fail to coordinate with frontier labs, so our research becomes outdated or inapplicable (due to changes in frontier training practices) or is never implemented (due to insufficient outreach).
~$500k equivalent in H100 compute hours from UK AISI
$210,000 from Open Philanthropy (project grant to Puria and Cameron, pre-Geodesic)
$124,660 from UK AISI (project grant to Puria and Cameron, pre-Geodesic)