
Funding requirements

  • Sign grant agreement

  • Reach min funding

  • Get Manifund approval

Seed Funding For Geodesic Research


Cameron Tice

Proposal · Grant
Closes February 28th, 2026
$200,000 raised
$200,000 minimum funding
$450,000 funding goal


Project summary

Seed/bridge funding for Geodesic Research, a UK-based, not-for-profit AI safety organization focused on data-intensive interventions for AI safety, largely at the pretraining stage, and on thorough follow-up work to our prior research on alignment pretraining.

What are this project's goals? How will you achieve them?

Geodesic is building the field of alignment pretraining: conceptually simple, scalable interventions applied during model training rather than relying solely on post-training techniques.

We are building on the premise that post-training alignment is useful but incomplete. Safety training degrades under fine-tuning and breaks down in agentic contexts, and if misaligned goals are inherited early in training, models could fake alignment through post-training. Creating safe and aligned models therefore likely benefits from interventions across the entire training stack, not just post-training.

Our inaugural paper demonstrates that curating pretraining data can drastically improve alignment even after post-training, without requiring new training techniques or harming capabilities. Researchers from OpenAI, Google DeepMind, and UK AISI have expressed excitement about this direction.
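To make concrete what a data-side pretraining intervention can look like, here is a minimal illustrative sketch in Python. It is a toy under stated assumptions, not Geodesic's actual pipeline: the scorer, blocklist, and threshold are placeholders standing in for a trained safety/quality classifier applied to documents before they enter the pretraining corpus.

    # Toy sketch of pretraining data curation (illustrative only; the
    # scorer, blocklist, and threshold are placeholders, not Geodesic's method).
    from typing import Iterable, Iterator

    def score_document(text: str) -> float:
        """Stand-in scorer: a real pipeline would use a trained classifier.
        Here we simply penalize documents containing blocklisted phrases."""
        blocklist = ("how to make a weapon", "bypass safety filters")
        hits = sum(phrase in text.lower() for phrase in blocklist)
        return 1.0 / (1.0 + hits)

    def curate(corpus: Iterable[str], threshold: float = 0.9) -> Iterator[str]:
        """Yield only documents whose score clears the threshold, so filtering
        happens before pretraining rather than being patched in afterwards."""
        for doc in corpus:
            if score_document(doc) >= threshold:
                yield doc

    if __name__ == "__main__":
        raw = [
            "A textbook chapter on linear algebra.",
            "Instructions on how to make a weapon at home.",
            "A forum thread about gardening.",
        ]
        kept = list(curate(raw))
        print(f"kept {len(kept)} of {len(raw)} documents")

The point of the sketch is only that the intervention happens at the data layer, upstream of any training run; the actual research concerns how far such curation scales and how it interacts with post-training.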

As METR is known for evals, Redwood for control, and Apollo for scheming, we aim to become the safety organization responsible for securing pretraining.

How will this funding be used?

This $450k provides six months of runway while we confirm funding from larger institutional donors.

  • Core team salaries (~$330k): Transition salaries for our three researchers, allowing Founder and Co-Director Puria Radmard to pause his PhD and join full-time

  • Operations hire (~$120k): Bringing on Alexandra Narin (co-founder, UK AI Forum) to manage our restructuring, recruiting, and contracting

We have ~$500k worth of H100 compute from UK AISI for Q1 of 2026. Staffing, not compute, is our bottleneck.

Who is on your team? What's your track record on similar projects?

Puria Radmard (Founder & Co-Director) — a PhD student in theoretical neuroscience at the University of Cambridge. He led Geodesic’s early work on steganography. Previously, Puria has worked as a machine learning engineer at raft.ai and as a private equity quantitative strategist at Goldman Sachs.

Cameron Tice (Founder & Co-Director) — was a Marshall Scholar at the University of Cambridge, where he recently completed his MPhil on automated research for computational psychiatry. Previously, Cameron was the lead author of Noise Injection for Sandbagging Detection and a Research Manager for the ERA: AI Fellowship.

Kyle O'Brien (Founding Member of Technical Staff) — joined Geodesic through the ERA fellowship. Kyle leads our alignment pretraining research agenda and has developed strong relationships with UK AISI through his previous research on Deep Ignorance. Before joining Geodesic, Kyle was at EleutherAI and Microsoft.

Geodesic was founded in July 2025 and is fiscally sponsored by Meridian Impact CIC in Cambridge. We are the only organization outside the frontier labs positioned to openly pursue pretraining safety research at scale.

What are the most likely causes and outcomes if this project fails?

There are two likely failure modes:

  1. Our promising initial results do not scale to larger models and more realistic evaluations.

  2. We fail to coordinate with frontier labs, so our research either becomes outdated or inapplicable (due to changes in frontier training practices) or is never implemented (due to a lack of outreach).

How much money have you raised in the last 12 months, and from where?

  • ~$500k equivalent in H100 compute hours from UK AISI

  • $210,000 from Open Philanthropy (project grant to Puria and Cameron, pre-Geodesic)

  • $124,660 from UK AISI (project grant to Puria and Cameron, pre-Geodesic)

Similar projects
  • Coordinal Research: Accelerating the research of safely deploying AI systems (Ronak Mehta). Funding for a new nonprofit organization focusing on accelerating and automating safety work. Technical AI safety. $125K raised.

  • Operating Capital for AI Safety Evaluation Infrastructure (Chris Canal). Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide. Technical AI safety, AI governance, Biosecurity. $400K raised.

  • Apollo Research: Scale up interpretability & behavioral model evals research (Apollo Research). Hire 3 additional AI safety research engineers / scientists. Technical AI safety. $339K raised.

  • Building and maintaining the Alignment Ecosystem (Siao Si Looi). 12 months funding for 3 people to work full-time on projects supporting AI safety efforts. Technical AI safety, AI governance, EA community, Global catastrophic risks. $0 raised.

  • General support for SaferAI (SaferAI). Support for SaferAI’s technical and governance research and education programs to enable responsible and safe AI. AI governance. $100K raised.

  • Arkose (Arkose). AI safety outreach to experienced machine learning professionals. Science & technology, Technical AI safety, Global catastrophic risks. $0 raised.

  • Scoping Developmental Interpretability (Jesse Hoogland). 6-month funding for a team of researchers to assess a novel AI alignment research agenda that studies how structure forms in neural networks. Technical AI safety. $145K raised.