Manifund

AI governance

27 proposals
59 active projects
$3.76M
Grants: 209 · Impact certificates: 8

Jess Hines (Fingerprint Content)

Department of Future Listening: Narrative Risk Radar (UK pilot)

Detect polarising story-frames early and build better narratives—fast, practical, adoptable.

AI governance · Forecasting · Global catastrophic risks · Global health & development
1
0
$0 / $300K

AIVA OS: Causal Intelligence for Medicine

Reconstructs longitudinal patient state, identifies causal drivers, and runs simulations to prevent high-severity clinical failures despite fragmented data.

Science & technology · AI governance · Global health & development
1
0
$0 / $500K

Igor Labutin

AI: Save Our Souls

A tech-infused immersive musical. Experience the future of storytelling where artificial intelligence meets the depths of human emotion.

Science & technology · Technical AI safety · AI governance
2
0
$0 / $120K

Jacob Steinhardt

Transluce: Fund Scalable Democratic Oversight of AI

Technical AI safety · AI governance
5
2
$40.3K / $2M

David Krueger

Evitable: a new public-facing AI risk non-profit

Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.

AI governance · Global catastrophic risks
5
3
$5.28K / $1.5M

Amrit Sidhu-Brar

Forethought

Research on how to navigate the transition to a world with superintelligent AI systems

AI governance · Global catastrophic risks
4
3
$365K / $3.25M

Lawrence Wagner

Reducing Risk in AI Safety Through Expanding Capacity.

Technical AI safety · AI governance · EA community · Global catastrophic risks
2
2
$10K raised

Sara Holt

Paper Clip Apocalypse (War Horse Machine)

Short Documentary and Music Video

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1
1
$0 / $40K

Mirco Giacobbe

Formal Certification Technologies for AI Safety

Developing the software infrastructure to make AI systems safe, with formal guarantees

Science & technology · Technical AI safety · AI governance
2
2
$128K raised

Gergő Gáspár

Runway until January: Amplify's funding ask to market EA & AI Safety

Help us solve the talent and funding bottleneck for EA and AIS.

Technical AI safety · AI governance · EA community · Global catastrophic risks
9
6
$520 raised

Alex Leader

Offensive Cyber Kill Chain Benchmark for LLM Evaluation

Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1
2
$0 / $3.85M

Ella Wei

Testing a Deterministic Safety Layer for Agentic AI (QGI Prototype)

A prototype safety engine designed to relieve the growing AI governance bottleneck created by the EU AI Act and global compliance demands.

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1
0
$0 / $20K

Chris Canal

Operating Capital for AI Safety Evaluation Infrastructure

Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide

Technical AI safety · AI governance · Biosecurity
3
7
$400K raised

Evžen Wybitul

Retroactive: Presenting a poster at the ICML technical AI governance workshop

AI governance
1
3
$1.3K raised

Joseph E Brown

Architectural Governance to Prevent Authority Drift in AI Systems

A constraint-first approach to ensuring non-authoritative, fail-closed behavior in large language models under ambiguity and real-world pressure

Science & technology · Technical AI safety · AI governance
1
1
$0 / $30K

Mackenzie Conor James Clark

AURA Protocol: Measurable Alignment for Autonomous AI Systems

An open-source framework for detecting and correcting agentic drift using formal metrics and internal control kernels

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1
1
$0 / $75K

Anthony Ware

Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap

Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.

Technical AI safety · AI governance · Global catastrophic risks
2
1
$0 / $23.5K

David Rozado

Disentangling Political Bias from Epistemic Integrity in AI Systems

An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems

Science & technology · Technical AI safety · ACX Grants 2025 · AI governance · Forecasting · Global catastrophic risks
1
1
$50K raised

Orpheus Lummis

Guaranteed Safe AI Seminars 2026

Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.

Technical AI safety · AI governance · Global catastrophic risks
5
3
$30K raised

Unfunded Projects

Centre pour la Sécurité de l'IA

From Nobel Signatures to Binding Red Lines: The 2026 Diplomatic Sprint

Leveraging 12 Nobel signatories to harmonize lab safety thresholds and secure an international agreement during the 2026 diplomatic window.

Technical AI safety · AI governance
6
0
$0 raised