
Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval

EagleAI

Technical AI safety · AI governance · Forecasting · Global catastrophic risks · Global health & development

Feranmi Williams

Proposal · Grant
Closes February 13th, 2026
$0 raised
$15,000 minimum funding
$20,000 funding goal

31 days left to contribute


Project summary

EagleAI is a global, decentralized watchdog and reporting platform designed to document frontier AI safety failures, with a specialized focus on synthetic sexual exploitation and deceptive deepfakes. While incidents like the Grok/BBC controversy draw attention to catastrophic "jailbreaks" in the West, the same harms are often amplified, yet largely invisible, in the Global South. EagleAI provides secure, independent infrastructure for victims, researchers, and journalists to submit verifiable evidence of AI-generated abuse. By transforming raw incident data into standardized global safety signals, we supply the empirical evidence that international regulators (AU, EU, NITDA) need to enforce accountability and "pre-deployment" safety standards for frontier models.

What are this project's goals? How will you achieve them?

Our mission is to build a Global Safety Feedback Loop that bridges the gap between local harms and global policy.

Establish a Global Taxonomy: We will develop a structured reporting framework (based on OECD standards) to categorize AI harms by severity, model type, and "jailbreak" method; a sketch of one possible report schema appears after this list.

Deploy the EagleAI Portal: We will build a secure web and mobile-responsive application for global incident submission, ensuring data privacy for victims.

Research & Evidence Synthesis: We will produce bi-annual "Global Threat Landscape" reports and specific policy briefs to inform cyber-security and legislative bodies.

Capacity Building: Through virtual outreach sessions and partnerships (e.g., PLASMIDA), we will train stakeholders to identify, document, and report the subtle model failures that lab-based evals miss.
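To make the taxonomy goal concrete, the sketch below shows one possible shape for a structured incident record, written in TypeScript. The field names, harm categories, and severity levels are illustrative assumptions for this proposal, not a finalized EagleAI or OECD schema.

    // Illustrative only: one possible shape for a structured incident report.
    // Field names, categories, and severity levels are hypothetical, not a
    // finalized EagleAI or OECD schema.
    type Severity = "low" | "moderate" | "severe" | "critical";

    type HarmCategory =
      | "synthetic-sexual-exploitation"
      | "deceptive-deepfake"
      | "impersonation"
      | "other";

    interface IncidentReport {
      id: string;               // platform-assigned identifier
      submittedAt: string;      // ISO 8601 timestamp
      region: string;           // reporter-supplied region, e.g. "West Africa"
      modelIdentifier?: string; // frontier model involved, if known
      harmCategory: HarmCategory;
      severity: Severity;
      jailbreakMethod?: string; // free-text description of how safeguards were bypassed
      evidenceRefs: string[];   // pointers to separately stored, encrypted evidence
      reporterConsent: boolean; // consent to share with regulators and researchers
    }

Capturing severity, model, and bypass method in a single structured record is what would let individual submissions be aggregated into the standardized global safety signals described in the project summary.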

How will this funding be used?

We are requesting $20,000 for a high-intensity 6-month launch phase:

Platform Engineering ($10,000): Development of the secure reporting backend, encrypted storage, and user interface using low-cost, scalable cloud infrastructure.

Forensic Research & Data Analysis ($5,000): Stipends for specialized researchers to analyze submissions, verify "jailbreak" techniques, and draft policy evidence.

Outreach & Global Awareness ($4,000): Targeted digital campaigns to ensure victims and researchers in diverse regions (starting with West Africa) are aware of the reporting tool.

Legal & Operational Setup ($1,000): NGO registration (CAC) and compliance filing.

Who is on your team? What's your track record on similar projects?

Feranmi Williams (Lead): First-Class Honours graduate in Classics (University of Ibadan) with a research focus on AI Ethics and Safety.

Track Record (Government Partnership): Successfully collaborated with PLASMIDA (Plateau State Agency) to train over 1,000 students and professionals in AI safety and literacy.

Advocacy: I currently lead a team addressing addiction and sexual exploitation, providing the frontline experience needed to handle sensitive data and victim support.

Ethics: My background in Classics provides a unique "Human-Centric" lens for identifying and categorizing subtle moral failures in AI systems.

What are the most likely causes and outcomes if this project fails?

Cause 1: Low Adoption/Invisibility: If victims don't know the platform exists, data will be sparse.

Mitigation: We are leveraging existing partnerships with state agencies and youth networks to "seed" the platform's user base.

Cause 2: Technical Security Breach: Given the sensitive nature of the data, a breach would be catastrophic.

Mitigation: We will implement end-to-end encryption for submissions and follow international data protection standards (GDPR/NDPR); an illustrative sketch of what client-side encryption could look like follows below.
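As one illustration of end-to-end encrypted submissions, the minimal sketch below uses libsodium's sealed boxes so a report is encrypted in the reporter's browser and can only be opened with EagleAI's review keypair. The function names sealReport and openReport, and the choice of the libsodium-wrappers package, are assumptions made for this sketch, not a committed implementation.

    // A minimal sketch, assuming the libsodium-wrappers package and its sealed-box API;
    // EagleAI's actual design may differ.
    import sodium from "libsodium-wrappers";

    // Encrypt a report client-side so only the holder of the review keypair can read it.
    async function sealReport(reportJson: string, reviewPublicKey: Uint8Array): Promise<string> {
      await sodium.ready;
      const sealed = sodium.crypto_box_seal(sodium.from_string(reportJson), reviewPublicKey);
      return sodium.to_base64(sealed); // safe to transmit and store as-is
    }

    // Decrypt on the trusted review side with the matching keypair.
    async function openReport(sealedB64: string, publicKey: Uint8Array, privateKey: Uint8Array): Promise<string> {
      await sodium.ready;
      const plain = sodium.crypto_box_seal_open(sodium.from_base64(sealedB64), publicKey, privateKey);
      return sodium.to_string(plain);
    }

With sealed boxes, the submission server never needs to see plaintext reports, which limits the damage of a server-side breach.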

Outcome if Failed: Even if the platform itself sees low volume, the taxonomy and policy research produced will serve as a foundational "bluebook" for other safety watchdogs globally.

How much money have you raised in the last 12 months, and from where?

$0. This is an idea-stage initiative seeking its first "catalytic" grant to prove the concept and establish the legal and technical foundation.
