Manifund

Technical AI safety

23 proposals
84 active projects
$5.65M
Grants: 255 · Impact certificates: 20

Marouso Metocharaki
QAI-QERRA: Open-Source Quantum-Ethical Safeguards for AI Misalignment
Independent Greek researcher advancing an open-source quantum-ethical framework with verifiable remorse simulation to reduce misalignment risks in advanced/humanoid AI.
Science & technology · Technical AI safety · Global catastrophic risks
$0 / $30K

Ella Wei
Technical Implementation of the Tiered Invariants AI Governance Architecture
Achieving major reductions in code complexity and compute overhead while improving transparency and reducing deceptive model behavior
Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $20K

Jacob Steinhardt
Transluce: Fund Scalable Democratic Oversight of AI
Technical AI safety · AI governance
$40.3K / $2M

Krishna Patel
Isolating CBRN Knowledge in LLMs for Safety - Phase 2 (Research)
Expanding proven isolation techniques to high-risk capability domains in Mixture of Experts models
Technical AI safety · Biomedical · Biosecurity
$150K raised

Alex Leader
Offensive Cyber Kill Chain Benchmark for LLM Evaluation
Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs
Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $3.85M

Lawrence Wagner
Reducing Risk in AI Safety Through Expanding Capacity
Technical AI safety · AI governance · EA community · Global catastrophic risks
$10K raised

Joseph E Brown
Architectural Governance to Prevent Authority Drift in AI Systems
A constraint-first approach to ensuring non-authoritative, fail-closed behavior in large language models under ambiguity and real-world pressure
Science & technology · Technical AI safety · AI governance
$0 / $30K

Mackenzie Conor James Clark
AURA Protocol: Measurable Alignment for Autonomous AI Systems
An open-source framework for detecting and correcting agentic drift using formal metrics and internal control kernels
Science & technology · Technical AI safety · AI governance
$0 / $75K

Finn Metz
AI Security Startup Accelerator Batch #2
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Science & technology · Technical AI safety · Global catastrophic risks
$355K raised

Preeti Ravindra
Addressing Agentic AI Risks Induced by System Level Misalignment
AI Safety Camp 2026 project: bidirectional failure modes between security and safety
Technical AI safety · Global catastrophic risks
$0 / $4K

Sean Peters
Evaluating Model Attack Selection and Offensive Cyber Horizons
Measuring attack selection as an emergent capability, and extending offensive cyber time horizons to newer models and benchmarks
Technical AI safety
$41K raised

Xyra Sinclair
SOTA Public Research Database + Search Tool
Unlocking the paradigm of agents + SQL + compositional vector search
Science & technology · Technical AI safety · Biomedical · AI governance · Biosecurity · Forecasting · Global catastrophic risks
$0 / $20.7K

Parker Whitfill
Course Buyouts to Work on AI Forecasting, Evals
Technical AI safety · Forecasting
$38K / $76K

Mirco Giacobbe
Formal Certification Technologies for AI Safety
Developing the software infrastructure to make AI systems safe, with formal guarantees
Science & technology · Technical AI safety · AI governance
$128K raised

Gergő Gáspár
Runway till January: Amplify's funding ask to market EA & AI Safety
Help us solve the talent and funding bottleneck for EA and AIS.
Technical AI safety · AI governance · EA community · Global catastrophic risks
$520 raised

Anthony Ware
Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap
Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.
Technical AI safety · AI governance · Global catastrophic risks
$0 / $23.5K

L
Visa fee support for Australian researcher to join a fellowship with Anthropic
Technical AI safety
$4K raised