

Linnexus AI

Technical AI safety, AI governance, Global catastrophic risks, Global health & development

Feranmi Williams

Proposal · Grant
Closes February 12th, 2026
$0 raised
$8,000 minimum funding
$10,000 funding goal


Project summary

Linnexus AI is an idea-stage initiative to monitor, document, and mitigate the misuse of generative AI platforms like Grok, Midjourney, and Stable Diffusion in Africa. The project focuses on protecting children, women, and vulnerable communities from AI-generated sexual content, deepfakes, and digital exploitation. Linnexus AI will also advocate for policy reform, provide education on AI safety, and create a reporting portal for victims to safely report abuse.

What are this project's goals? How will you achieve them?

Goals:

  1. Establish Africa’s first AI Safety watchdog to monitor and report AI-enabled sexual abuse and deepfakes.

  2. Influence AI policy at national and continental levels (AU, NITDA, NCC) to enforce platform accountability and protect human dignity.

  3. Build AI literacy and awareness among journalists, schools, parents, faith communities, and law enforcement.

  4. Create a reporting system for victims of AI abuse and coordinate with cybercrime units.

How we will achieve them:

  1. Produce a flagship policy & research brief on “Generative AI and Sexual Exploitation Risks in Africa.”

  2. Draft a minimum viable reporting framework with protocols for victim reporting.

  3. Build strategic partnerships with at least 1 regulator and 1 NGO to support policy advocacy.

  4. Register Linnexus AI as an NGO and assemble an advisory board of AI, law, child protection, and ethics experts.

How will this funding be used?

  1. CAC registration & incorporation (legal formation of Linnexus AI as a non-profit): $500

  2. Policy brief production (research, writing, editing, design, and publication): $2,000

  3. Minimum viable reporting portal (secure platform prototype for AI abuse reporting): $3,000

  4. Advisory board consultations & honoraria (engaging experts to guide the initiative): $1,000

  5. Initial advocacy & partnership engagement (policy dialogue, meetings, and foundational MoUs): $3,500

Total: $10,000

Who is on your team? What's your track record on similar projects?

  1. Founder: Feranmi Williams

Experienced in applied generative AI literacy; has trained youth and professionals across Nigeria.

  2. Advisory board (planned, not yet formalized):

AI researchers and safety experts

Lawyers in cyber law and human rights

Child protection advocates

Faith and ethics leaders

Tech policy specialists

What are the most likely causes and outcomes if this project fails?

Causes of failure at this stage:

Lack of early-stage funding to establish the initiative

Limited awareness among African regulators about AI risks

Absence of a coordinated reporting and monitoring system

Likely outcomes if it fails:

Continued unmonitored abuse of generative AI in Africa

Victims of AI-generated sexual exploitation left without reporting or recourse

Delay in policy adoption for AI accountability and child protection

Africa missing the opportunity to set its own AI safety and ethics standards

How much money have you raised in the last 12 months, and from where?

$0 raised to date for this project; it is currently at the idea stage.

This seed funding request is intended to provide the founder the resources to launch Linnexus AI, produce the first research brief, and set up initial partnerships.

Comments (1)

Feranmi Williams

about 3 hours ago

Hi everyone,

I’m excited to share Linnexus AI, an idea-stage initiative to establish Africa’s first AI Safety & Digital Dignity Watchdog.

Recent reports, including BBC coverage of Grok AI, have highlighted a worrying trend: some generative AI platforms are enabling users to create illicit sexualized and non-consensual images, including deepfakes. This is a serious threat to children, women, and vulnerable communities, yet Africa currently has no dedicated organization monitoring or responding to these harms.

Linnexus AI aims to fill this critical gap by:

✅ Researching & documenting AI misuse across Africa

✅ Advocating for policy and platform accountability with the AU, NITDA, NCC, and other regulators

✅ Training journalists, schools, churches, parents, and law enforcement to recognize and respond to AI abuse

✅ Launching a reporting portal for victims of AI-generated abuse

We're currently at the idea stage and seeking catalytic seed funding to:

✅ Produce the first Africa-focused policy & research brief

✅ Draft the minimum viable reporting framework

✅ Register Linnexus AI as a legal entity

✅ Form our advisory board and begin strategic partnerships

Your support can help us take the first step toward a safer digital future for Africa.

Thank you for considering Linnexus AI. I’d be happy to answer any questions or provide more details.