Manifund
Global catastrophic risks

20 proposals
46 active projects
$2.97M
185 grants · 8 impact certificates
Linnexus AI (Feranmi Williams)
Africa’s First AI Safety & Digital Dignity Watchdog
Categories: Technical AI safety, AI governance, Global catastrophic risks, Global health & development
Funding: $0 / $10K

Graduate Training for AI Governance and Long-Term Risk Reduction (Agwu Naomi Nneoma)
Building early-career capacity to reduce AI-driven societal and catastrophic risks through focused graduate study.
Categories: Science & technology, Technical AI safety, AI governance, EA community, Forecasting, Global catastrophic risks
Funding: $0 / $50K

Public Food Procurement as a Climate Policy Tool in the EU (Melina Moreira Campos Lima)
Assessing the Climate Potential of Catering Systems in Public Schools and Hospitals
Categories: Science & technology, Animal welfare, Global catastrophic risks
Funding: $0 / $30K

Paper Clip Apocalypse (War Horse Machine) (Sara Holt)
Short Documentary and Music Video
Categories: Science & technology, Technical AI safety, AI governance, Global catastrophic risks
Funding: $0 / $40K

Terminal Boundary Systems and the Limits of Self-Explanation (Avinash A)
Formalizing the "Safety Ceiling": An Agda-Verified Impossibility Theorem for AI Alignment
Categories: Science & technology, Technical AI safety, Global catastrophic risks
Funding: $0 / $30K

Mitigating Systemic Risks of Unchecked AI Deployment (Feranmi Williams)
A field-led policy inquiry using Nigeria’s MSME ecosystem as a global stress-test for Agentic AI governance.
Categories: AI governance, Global catastrophic risks, Global health & development
Funding: $0 / $8K

Evitable: a new public-facing AI risk non-profit (David Krueger)
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Categories: AI governance, Global catastrophic risks
Funding: $5.28K / $1.5M

Learn & Launch: Monthly Training and Mentorship for Early-Stage AI Safety & Gov. (AI Safety Nigeria)
A low-cost, high-leverage capacity-building program for early-career AI safety and governance practitioners
Categories: Technical AI safety, AI governance, Global catastrophic risks
Funding: $0 / $2.5K

Forethought (Amrit Sidhu-Brar)
Research on how to navigate the transition to a world with superintelligent AI systems
Categories: AI governance, Global catastrophic risks
Funding: $365K / $3.25M

Reducing Risk in AI Safety Through Expanding Capacity (Lawrence Wagner)
Categories: Technical AI safety, AI governance, EA community, Global catastrophic risks
Funding: $10K raised

AI Security Startup Accelerator Batch #2 (Finn Metz)
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Categories: Science & technology, Technical AI safety, Global catastrophic risks
Funding: $355K raised

Testing a Deterministic Safety Layer for Agentic AI (QGI Prototype) (Ella Wei)
A prototype safety engine designed to relieve the growing AI governance bottleneck created by the EU AI Act and global compliance demands.
Categories: Science & technology, Technical AI safety, AI governance, Global catastrophic risks
Funding: $0 / $20K

Addressing Agentic AI Risks Induced by System-Level Misalignment (Preeti Ravindra)
AI Safety Camp 2026 project: Bidirectional failure modes between security and safety
Categories: Technical AI safety, Global catastrophic risks
Funding: $0 / $4K

Offensive Cyber Kill Chain Benchmark for LLM Evaluation (Alex Leader)
Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs
Categories: Science & technology, Technical AI safety, AI governance, Global catastrophic risks
Funding: $0 / $3.85M

Runway till January: Amplify's funding ask to market EA & AI Safety (Gergő Gáspár)
Help us solve the talent and funding bottleneck for EA and AI safety.
Categories: Technical AI safety, AI governance, EA community, Global catastrophic risks
Funding: $520 raised

AURA Protocol: Measurable Alignment for Autonomous AI Systems (Mackenzie Conor James Clark)
An open-source framework for detecting and correcting agentic drift using formal metrics and internal control kernels
Categories: Science & technology, Technical AI safety, AI governance, Global catastrophic risks
Funding: $0 / $75K

SOTA Public Research Database + Search Tool (Xyra Sinclair)
Unlocking the paradigm of agents + SQL + compositional vector search
Categories: Science & technology, Technical AI safety, Biomedical, AI governance, Biosecurity, Forecasting, Global catastrophic risks
Funding: $0 / $20.7K

Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap (Anthony Ware)
Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.
Categories: Technical AI safety, AI governance, Global catastrophic risks
Funding: $0 / $23.5K

SDCPNs for AI Safety (Jade Master)
Developing correct-by-construction world models for verification of frontier AI
Categories: Science & technology, Technical AI safety, Global catastrophic risks
Funding: $39K raised

Disentangling Political Bias from Epistemic Integrity in AI Systems (David Rozado)
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Categories: Science & technology, Technical AI safety, ACX Grants 2025, AI governance, Forecasting, Global catastrophic risks
Funding: $50K raised