Manifund

Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval

AI Research Communicator: Bridging Papers and Practice (1k+ readers)

Faisal Moarafur Rasul

Proposal · Grant
Closes February 21st, 2026
$0 raised
$6,000 minimum funding
$12,000 funding goal


31 days left to contribute

You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.


Project summary

I'm an independent AI Research Communicator who has built an audience of 1,000+ subscribers by explaining the latest developments in artificial intelligence, and how the underlying technology works, to policymakers, educators, and practitioners.

Here's the problem: AI research moves faster than public understanding. New papers and breakthroughs drop daily, but most stakeholders don't have the time or technical background to figure out what actually matters.

Misinformation fills the space where clear explanations should be shaping decisions. Decision-makers build trillion-dollar frameworks on shallow understanding. And valuable research stays locked in academic papers, never reaching the people who could actually use it.

I'm trying to fix this by building public infrastructure that makes AI easier to understand. As a storyteller and content marketer, I know how to deliver a message so that audiences can properly grasp it. My aim has always been to make information clearer and more impactful.

To achieve this mission, I publish across three platforms. On Medium, I write in-depth research analysis. On Substack (1,000+ subscribers), I share curated insights produced through transparent AI-human collaboration. On GitHub, I build open-source educational tools; the full story and projects live in my portfolio.

Each piece builds on what came before, creating a growing reference library for anyone trying to make sense of AI's rapid development. I also regularly share concise AI updates and reflections on LinkedIn, where I engage with a community of 13,000+ followers.

I’m asking for $12,000 over six months to continue this work full-time and expand its reach. If only $6,000 is available, I would maintain the project at a reduced pace, focusing on the most important AI developments to ensure the work remains active and publicly accessible.

This isn’t a request to bet on an idea. I have already proven that this works, and I’m asking for support to scale what is already creating value.

What are this project's goals? How will you achieve them?

Main Goal

To make the latest AI developments, including research breakthroughs, real-world use cases, and underlying system mechanisms, understandable and usable for non-specialist audiences through clear writing, video content, and live sessions.

What I Will Deliver Over 6 Months

Content Creation

• 48 original Medium articles published twice a week, analyzing new AI research papers, major developments, and their practical implications

• 48 Substack issues published twice a week, offering curated insights and research synthesis with transparent disclosure of AI and human collaboration

• 12 open-source GitHub projects released twice a month, including interactive demos, educational tools, or clearly documented implementations

Growth Targets

• Grow Substack from 1,000 to 4,000 subscribers through organic distribution

• Reach 50,000+ monthly views across all platforms

• Receive 5+ citations in articles, policy discussions, or educational resources

How I Will Achieve This

I already operate a system that works. Each day, I monitor arXiv, major AI labs, and research communities to identify papers and developments worth translating. Once identified, I adapt each insight for different audiences and levels of depth. Technical analysis is published on Medium. Accessible synthesis is shared on Substack. Practical, hands-on tools are built and released on GitHub.

I remain actively engaged with communities to understand which knowledge gaps matter most in practice. Every piece goes through an editorial checklist and a technical accuracy review. AI tools are used to support research and drafting efficiency, but I retain full editorial control and always disclose their use.

How I Will Measure Success

Subscriber growth indicates sustained value. External citations demonstrate impact beyond my immediate audience. Corrections and updates reflect intellectual honesty and responsiveness to new evidence. Most importantly, direct feedback from readers who apply this work to real-world decisions is the clearest signal that the project is succeeding.

How will this funding be used?

Total Ask: $12,000 over 6 months

Here's Where the Money Goes:

a) Research and Production (75%, or $9,000)

This covers my living costs for dedicated full-time work over 6 months, allowing me to focus entirely on creating public value through research translation and educational tools.

b) Infrastructure (12.5%, or $1,500)

The tools I need to do this well:

  • Compute resources for running AI models and testing tools ($400)

  • Research database access, arXiv alerts, paper management ($300)

  • AI tools like ChatGPT, Claude, DeepSeek, and Gemini for research assistance ($400)

  • Hosting and publishing (Medium membership, Substack Pro, GitHub) ($250)

  • Domain, email, professional infrastructure ($150)

c) Distribution and Outreach (8.33%, or $1,000)

Getting the work in front of people who need it:

  • Newsletter tools and growth infrastructure ($300)

  • Community engagement and partnerships ($200)

  • Collaboration opportunities like guest posts or podcast appearances ($300)

  • Social media scheduling and analytics ($200)

d) Contingency (4.17%, or $500)

A buffer for unexpected expenses or strategic opportunities. Maybe there's a conference I should attend. Maybe I need to pay an expert to review something technical. This gives me flexibility.

e) My Transparency Promise:

Every month, I will show you exactly how I spent the money. If something costs way more or way less than expected, I will explain why and show you how I'm adjusting.

Who is on your team? What's your track record on similar projects?

Team: Just Me

I'm a solo operation, which actually has advantages. I maintain complete quality control. I can make decisions quickly. There's no overhead. And you know exactly who's accountable for delivering what I promise.

My Background:

I have spent over a decade doing professional writing, content marketing, and research. I know how to take complex ideas and make them clear. I understand what makes content valuable and how to reach the right audiences. I have learned how to deliver quality work on deadline, which matters when you're committing to twice-weekly publishing.

This isn't me learning on the job. I'm applying skills I have developed over 10+ years to a problem that really needs solving.

What I Have Already Built:

I have grown my Substack to 1,000+ subscribers organically, with no paid promotion. My portfolio is public, so you can judge the quality yourself. My GitHub shows I can build technical tools, not just write about them.

What Makes My Approach Different:

Most conversations about AI happen behind closed doors and rarely reach the public. I'm building in public with complete transparency. And I'm not just writing about responsible AI use; I'm demonstrating it by being upfront about when and how I use AI tools in my own work.

Why Trust Me to Execute:

First, I have already proven this works. Those 1,000+ subscribers didn't come from wishful thinking. They came from consistently delivering value. Second, I have a decade of professional experience meeting deadlines and maintaining quality standards. Third, my work is all public and permanent. You can check my track record anytime. And finally, I'm building accountability into the structure with monthly reports and open-source work you can audit.

What I Bring to the Table:

The combination is what matters. I can read and understand difficult AI insights. I have 10+ years of digital communication skills. I know how to synthesize information from multiple sources. And I understand content marketing well enough to make sure valuable work actually reaches people who need it.

What are the most likely causes and outcomes if this project fails?

Let me be honest about what could go wrong.

Scenario 1: I Don't Hit 4,000 Subscribers

Maybe I only reach 2,000 or 3,000 instead. Here's the thing, though: I would rather have 2,000 deeply engaged readers than 10,000 passive ones. If policymakers and practitioners are actually using my work to make decisions, that matters more than vanity metrics. Impact isn't just about numbers.

Scenario 2: Quality Drops Because I'm Pushing Too Hard

This would actually be bad. If I'm rushing to hit my twice-weekly targets and the quality suffers, I'm doing more harm than good. My plan: I will reduce output before I compromise quality. I have editorial checklists and accuracy reviews built in. But if I need to publish less to maintain standards, that's what I will do. I will be transparent about it.

Scenario 3: The AI Landscape Changes Dramatically

Maybe there's a major breakthrough or regulatory shift that makes my planned content less relevant. Honestly, I think rapid change actually increases the need for clear translation. But even if the specific topics shift, the core mission stays the same: making research accessible. I can pivot topics while keeping the methodology consistent.

Scenario 4: I Burn Out or Get Sick

I'm a solo operation with no backup. If something happens to me, everything stops. This is a real risk. My plan: maintain a sustainable work pace, build a content buffer so I can continue publishing during brief gaps, and communicate immediately if health issues come up.

How much money have you raised in the last 12 months, and from where?

Total Raised: $0

I have been funding this entirely myself through personal savings.

Why Haven't I Raised Money Before?

I wanted to prove this worked first. Building to 1,000+ subscribers and maintaining consistent output shows product-market fit before asking anyone to invest. I wanted to understand the real costs and realistic output levels, and I wanted a track record that speaks for itself.

What This Means Now:

This would be my first external support. It would let me focus full-time on this work for the first time ever.

No Conflicts:

I don't have other pending grant applications or institutional affiliations that would create conflicts. If you fund me, I can give this 100% of my working time. No competing obligations beyond basic living expenses.

If I Get Other Funding During These Six Months:

I will tell you immediately. We can adjust the scope or budget if it makes sense. Maybe extend the timeline or increase deliverables if I end up over-funded. I will provide full transparency in monthly reports about any other income related to this work.
