

Making AI Safety Understandable to Everyday Audiences

Science & technology · Technical AI safety · AI governance · Global catastrophic risks

Ryan Celimon

Proposal · Grant
Closes February 2nd, 2026
$0 raised · $500 minimum funding · $15,000 funding goal


Project Summary

Hey, I'm Ryan. I recently started my YouTube channel to cover AI news, new models, releases, and big announcements. At first, it was mostly a way for me to keep up myself.

But the deeper I dug for scripts, the more something started feeling wrong.

Most AI content out there is either extremely technical, made for people who are already obsessed, or extremely shallow, treating AI like a fun toy.

Meanwhile, the real issues (misalignment, loss of control, large-scale job disruption, and autonomous weapons) barely reach normal people at all.

I’ve taken this channel seriously from day one. I’ve been publishing 4-5 videos a week, sometimes multiple in a day, to learn fast, test what works, and build a real habit of shipping.

This isn’t a side thing or a short experiment. I’m committing to this for the next 5 to 10 years, with the goal of meaningfully improving how non-technical audiences understand where AI development could realistically lead.

Right now I'm averaging about 150 views per video.

Some videos already dip into safety topics like AGI scenarios, autonomous systems, and unintended consequences, but they're rushed: limited time means thinner research and only basic editing.

This project is me deliberately shifting the channel toward clear, no-BS AI safety content for regular people. Not just the AI crowd, but everyone who’s going to be affected by this technology even if they don’t follow it yet.

---

My goal is to lean into more scenarios, trend-aware framing, and tighter visual storytelling to explore AI misalignment and loss-of-control risks in a way that feels engaging rather than academic.

These kinds of videos tend to perform better with general audiences, but they are significantly more time-intensive to produce.

Here’s an early example of that direction:
https://www.youtube.com/watch?v=eRErQu6IiAY

That intro alone took roughly four hours to produce, including scripting, ideation, structure, sound design, editing, and generating custom images and AI video clips. That’s just for a short intro, not a full-length video.

This is the type of content I want to move toward more consistently. It’s more engaging, more memorable, and better suited to reaching people who don’t already follow AI or AI safety.

The main constraint right now isn't motivation; it's time and editing/production capacity.

Using thought experiments, scenarios, and fast-paced storytelling will pull in people who would never click on a traditional AI safety explainer.

Most viewers scrolling YouTube aren't thinking about misalignment or containment problems yet, but they will watch a compelling scenario built around a simple question:

Can we actually control something smarter than us?

This isn’t the only type of content I plan to make.

Alongside these more engaging, narrative-driven videos, I also want to produce slower, more serious pieces that go deeper into real-world risks, alignment failures, governance, and practical implications.

The idea is to use these story-based videos to earn attention, then use that attention to educate.

Other content:

(My channel) - https://www.youtube.com/channel/UC3oK6QYjH9dzjkGxePbf6Sw

https://www.youtube.com/watch?v=b_tn3Bx5LeU

https://www.youtube.com/watch?v=soPLryFY6m4


Funding would let me slow down just enough to go deeper.

Better research.
Better stories.
Better editing.
Better pacing.

The kind of work that might actually change how people see where this is heading.


What are this project’s goals? How will you achieve them?

The main goal is simple.

Get everyday people to understand why AI safety matters, early enough for it to matter.

Specifically, I want to:

  • Explain misalignment, AGI, and job loss without jargon

  • Reach beyond AI Twitter and YouTube into normal audiences

  • Make content that lands emotionally, not just intellectually

How I plan to do this:

  • Shift to fewer, longer videos, around 10 to 15 minutes, built around one clear idea

  • Use real stories, analogies, and everyday examples instead of abstract theory

  • Keep some shorter news videos for flow and discovery, but frame them through a safety lens

  • Improve pacing, visuals, and structure to boost viewer retention

  • Borrow formats that already work in other niches and apply them to AI safety


How will this funding be used?

I'm asking for $15,000 to cover up to 12 months of work, just enough to let me focus on producing higher-quality, safety-first videos consistently.

Right now I can publish frequently, but the content has to be made fast and the editing kept basic. This support would let me slow down slightly to increase research depth, storytelling quality, and production value without killing momentum.

The biggest unlock is editing help and better tools. An editor doesn’t just save time. They improve pacing, clarity, and flow.

They help difficult ideas actually land with people who aren’t technical. Combined with stronger visuals and modern AI video tools, the content shifts from information people hear to stories they remember.

How the money will be used:

Research, writing, and production - $9,000
Time to dig deeper, pressure-test ideas, and turn complicated topics into clear scripts regular people can understand.

Editing, visuals, and tools - $4,000
A part-time editor, improved visuals, stock footage, light animation, sound design, and AI video tools to help visualize ideas like misalignment or AGI.

Promotion and experimentation - $2,000
Thumbnail testing, small-scale promotion, and experimenting with formats to learn what actually helps educational AI safety content reach broader audiences, plus a few speaking classes to improve delivery.

Without funding, I’ll keep publishing and iterating. With funding, I can increase depth, clarity, and storytelling and give this approach a real chance to reach far more people.


Who is on your team? What’s your track record?

This is a solo project for now. I handle research, scripting, editing, thumbnails, posting, and SEO.

Track record so far:

  • 15 videos published in roughly two weeks

  • Consistent output of 4 to 5 videos per week

  • Early traction on a brand-new channel that shows good potential

The strongest signal so far isn't views but execution: consistent publishing, rapid iteration, and steady improvement week over week. Funding would help turn that raw output into something more impactful.


What are the most likely causes and outcomes if this project fails?

The main uncertainty is whether this style of AI safety content can consistently reach and hold the attention of non-technical audiences at scale.

If it doesn't work as hoped:

  • The project still produces clear explanations and tested formats that others can learn from

  • We gain practical insight into how people respond to different ways of presenting AI safety ideas

  • The downside is limited, mostly time and experimentation cost

The upside is asymmetric. If this approach works, it offers a low-cost way to bring a much broader audience into understanding and caring about AI safety earlier than they otherwise would.


How much money have you raised in the last 12 months?

Zero. This would be the first outside support.


Final note

This is an early bet. I’m not promising guaranteed scale or viral success. I’m asking for a proper shot at an approach that could work, at a moment when public understanding is far behind where it likely needs to be.
