Project summary
Linnexus AI is an idea-stage initiative to monitor, document, and mitigate the misuse of generative AI platforms such as Grok, Midjourney, and Stable Diffusion in Africa. The project focuses on protecting children, women, and vulnerable communities from AI-generated sexual content, deepfakes, and digital exploitation. Linnexus AI will also advocate for policy reform, provide education on AI safety, and create a reporting portal through which victims can safely report abuse.
What are this project's goals? How will you achieve them?
Goals:
Establish Africa’s first AI safety watchdog to monitor and report AI-enabled sexual abuse and deepfakes.
Influence AI policy at national and continental levels (AU, NITDA, NCC) to enforce platform accountability and protect human dignity.
Build AI literacy and awareness among journalists, schools, parents, faith communities, and law enforcement.
Create a reporting system for victims of AI abuse and coordinate with cybercrime units.
How we will achieve them:
Produce a flagship policy & research brief on “Generative AI and Sexual Exploitation Risks in Africa.”
Draft a minimum viable reporting framework with protocols for victim reporting.
Build strategic partnerships with at least one regulator and one NGO to support policy advocacy.
Register Linnexus AI as an NGO and assemble an advisory board of AI, law, child protection, and ethics experts.
How will this funding be used?
CAC registration & incorporation (legal formation of Linnexus AI as a non-profit): $500
Policy brief production (research, writing, editing, design, and publication): $2,000
Minimum viable reporting portal (secure platform prototype for AI abuse reporting): $3,000
Advisory board consultations & honoraria (engaging experts to guide the initiative): $1,000
Initial advocacy & partnership engagement (policy dialogue, meetings, and foundational MoUs): $3,500
Total: $10,000
Who is on your team? What's your track record on similar projects?
Founder: Feranmi Williams
Experienced in applied generative AI literacy, with a track record of training youth and professionals across Nigeria.
Advisory board (planned, not yet formalized):
AI researchers and safety experts
Lawyers in cyber law and human rights
Child protection advocates
Faith and ethics leaders
Tech policy specialists
What are the most likely causes and outcomes if this project fails?
Causes of failure at this stage:
Lack of early-stage funding to establish the initiative
Limited awareness among African regulators about AI risks
Absence of a coordinated reporting and monitoring system
Likely outcomes if it fails:
Continued unmonitored abuse of generative AI in Africa
Victims of AI-generated sexual exploitation left without reporting or recourse
Delay in policy adoption for AI accountability and child protection
Africa missing the opportunity to set its own AI safety and ethics standards
How much money have you raised in the last 12 months, and from where?
$0 raised to date for this project, which is currently at the idea stage.
This seed funding request is intended to provide the founder with the resources to launch Linnexus AI, produce its first research brief, and establish initial partnerships.