Manifund

AI governance

15 proposals · 58 active projects · $3.47M
Grants: 189 · Impact certificates: 8
Amrit Sidhu-Brar
Forethought
Research on how to navigate the transition to a world with superintelligent AI systems
AI governance, Global catastrophic risks
$300K / $3.25M

Lawrence Wagner
Reducing Risk in AI Safety Through Expanding Capacity
Technical AI safety, AI governance, EA community, Global catastrophic risks
$0 / $155K

Muhammad Ahmad
Building Frontier AI Governance Capacity in Africa (Pilot Phase)
A pilot to build policy and technical capacity for governing high-risk AI systems in Africa
Technical AI safety, AI governance, Biosecurity, Forecasting, Global catastrophic risks
$0 / $50K

Sandy Tanwisuth
Alignment as epistemic system governance under compression
We reframe the alignment problem as the problem of governing meaning and intent when they cannot be fully expressed.
Science & technology, Technical AI safety, AI governance
$0 / $20K

Brian McCallion
Boundary-Mediated Models of LLM Hallucination and Alignment
A mechanistic, testable framework explaining LLM failure modes via boundary writes and attractor dynamics
Technical AI safety, AI governance
$0 / $75K

Christopher Kuntz
Protocol-Level Interaction Risk Assessment and Mitigation (UIVP)
A bounded protocol audit and implementation-ready mitigation for intent ambiguity and escalation in deployed LLM systems.
Science & technology, Technical AI safety, AI governance
$0 / $5K

Jasraj Hari Krishna Budigam
TimeAlign v2: contamination-aware evals for small models (16GB GPUs)
Reusable, low-compute benchmarking that detects data leakage, outputs "contamination cards," and improves calibration reporting.
Science & technology, Technical AI safety, AI governance
$0 / $46K

Centre pour la Sécurité de l'IA
From Nobel Signatures to Binding Red Lines: The 2026 Diplomatic Sprint
Leveraging 12 Nobel signatories to harmonize lab safety thresholds and secure an international agreement during the 2026 diplomatic window.
Technical AI safety, AI governance
$0 / $400K

Mirco Giacobbe
Formal Certification Technologies for AI Safety
Developing the software infrastructure to make AI systems safe, with formal guarantees
Science & technology, Technical AI safety, AI governance
$128K raised

Gergő Gáspár
Runway till January: Amplify's funding ask to market EA & AI Safety
Help us solve the talent and funding bottleneck for EA and AIS.
Technical AI safety, AI governance, EA community, Global catastrophic risks
$520 raised

Evžen Wybitul
Retroactive: Presenting a poster at the ICML technical AI governance workshop
AI governance
$1.3K raised

Chris Canal
Operating Capital for AI Safety Evaluation Infrastructure
Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide
Technical AI safety, AI governance, Biosecurity
$400K raised

David Rozado
Disentangling Political Bias from Epistemic Integrity in AI Systems
An integrative framework for auditing political preferences and truth-seeking in AI systems
Science & technology, Technical AI safety, ACX Grants 2025, AI governance, Forecasting, Global catastrophic risks
$50K raised

Orpheus Lummis
Guaranteed Safe AI Seminars 2026
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
Technical AI safety, AI governance, Global catastrophic risks
$30K raised

Rufo Guerreschi
The Deal of the Century (for AI)
Persuading a critical mass of key potential influencers of Trump's AI policy to champion a bold, timely, and proper US-China-led global AI treaty
AI governance, Global catastrophic risks
$11.1K raised

Justin Olive
Inspect Evals
Funding to cover our expenses for three months during an unexpected shortfall
Technical AI safety, AI governance
$50K raised

Sam Nadel
How to mobilize people on AI risk: experimental message testing
Experimental message testing and historical analysis of tech movements to identify how to effectively mobilize people around AI safety and governance
AI governance
$0 / $52.7K

Leo Hyams
Fund a Fellow for the Cooperative AI Research Fellowship!
A 3-month fellowship in Cape Town, connecting a global cohort of talent to top mentors at MIT, Oxford, CMU, and Google DeepMind
Science & technology, Technical AI safety, AI governance, Global catastrophic risks
$2.53K raised

Cillian Crosson
Tarbell Center for AI Journalism
$200K in 1:1 matched funding to support reporting on AI.
AI governance
$27.6K / $200K

Armon Lotfi
When Safety Testing Costs More Than Model Training
Multi-agent AI security testing that reduces evaluation costs by 10-20x without sacrificing detection quality
Science & technology, Technical AI safety, AI governance
$999 / $15K