
Regrants


Melina Moreira Campos Lima

Public Food Procurement as a Climate Policy Tool in the EU

Assessing the Climate Potential of Catering Systems in Public Schools and Hospitals

Science & technology · Animal welfare · Global catastrophic risks
$0 / $30K

Sara Holt

Paper Clip Apocalypse (War Horse Machine)

Short Documentary and Music Video

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $40K

Avinash A

Terminal Boundary Systems and the Limits of Self-Explanation

Formalizing the "Safety Ceiling": An Agda-Verified Impossibility Theorem for AI Alignment

Science & technology · Technical AI safety · Global catastrophic risks
$0 / $30K

AI Safety Nigeria

Learn & Launch: Monthly Training and Mentorship for Early-Stage AI Safety & Gov.

A low-cost, high-leverage capacity-building program for early-career AI safety and governance practitioners

Technical AI safety · AI governance · Global catastrophic risks
$0 / $2.5K

Feranmi Williams

Mitigating Systemic Risks of Unchecked AI Deployment

A field-led policy inquiry using Nigeria’s MSME ecosystem as a global stress-test for Agentic AI governance.

AI governance · Global catastrophic risks · Global health & development
$0 / $8K

Lincoln Quirk

Civitech Incubator

Building Talent Infrastructure for American Democracy

$10K / $1M

Victor Hugo

OddScreener - real-time analytics for prediction markets

Real-time analytics terminal for Polymarket & Kalshi

Science & technology
$0 / $500

Jacob Steinhardt

Transluce: Fund Scalable Democratic Oversight of AI

Technical AI safety · AI governance
$40.3K / $2M

David Krueger

Evitable: a new public-facing AI risk non-profit

Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.

AI governance · Global catastrophic risks
$5.28K / $1.5M

Ella Wei

Technical Implementation of the Tiered Invariants AI Governance Architecture

Achieving major reductions in code complexity and compute overhead while improving transparency and reducing deceptive model behavior

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $20K

Hayley Martin

Laptop for legal & biodefence policy career transition

Urgent funding for a reliable laptop to enable EA work, legal studies, and biodefence policy preparation

$525 / $3.1K

Amrit Sidhu-Brar

Forethought

Research on how to navigate the transition to a world with superintelligent AI systems

AI governance · Global catastrophic risks
$365K / $3.25M

Krishna Patel

Isolating CBRN Knowledge in LLMs for Safety - Phase 2 (Research)

Expanding proven isolation techniques to high-risk capability domains in Mixture-of-Experts models

Technical AI safety · Biomedical · Biosecurity
$150K raised

Anna Salamon

aCFAR 2025/6 Fundraiser

Revised CFAR workshops. Same Sequences-epistemics, same CFAR classics, more support for individual freedom, sovereignty, and authorship

$0 / $125K

Lawrence Wagner

Reducing Risk in AI Safety Through Expanding Capacity.

Technical AI safety · AI governance · EA community · Global catastrophic risks
$10K raised

Alex Leader

Offensive Cyber Kill Chain Benchmark for LLM Evaluation

Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $3.85M

Finn Metz

AI Security Startup Accelerator Batch #2

Funding 5–10 AI security startups through Seldon’s second SF cohort.

Science & technology · Technical AI safety · Global catastrophic risks
$355K raised

Preeti Ravindra

Addressing Agentic AI Risks Induced by System Level Misalignment

AI Safety Camp 2026 project: Bidirectional failure modes between security and safety

Technical AI safety · Global catastrophic risks
$0 / $4K

Joseph E Brown

Architectural Governance to Prevent Authority Drift in AI Systems

A constraint-first approach to ensuring non-authoritative, fail-closed behavior in large language models under ambiguity and real-world pressure

Science & technology · Technical AI safety · AI governance
$0 / $30K

Sean Peters

Evaluating Model Attack Selection and Offensive Cyber Horizons

Measuring attack selection as an emergent capability, and extending offensive cyber time horizons to newer models and benchmarks

Technical AI safety
$41K raised
28 regrantors · 635 projects · $1.9M available funding