
AI Safety Communication FRAME Fellowship

Technical AI safety · AI governance · Global catastrophic risks

Vin Sixsmith

Proposal · Grant
Closes February 1st, 2026
$0 raised
$1,000 minimum funding
$12,500 funding goal


Project Summary

I’ve been accepted into the FRAME Fellowship, an 8-week, full-time, in-person program in San Francisco. It focuses on improving how AI safety and AI risk are communicated to wider audiences. The fellowship is presented by Manifund, Mox and Explainable, and runs in partnership with CivAI, ControlAI and EO. I’m raising funds to cover travel, accommodation and basic living costs so I can attend the fellowship and work on this full-time. The fellowship starts at the end of January, so the funding timeline is fairly tight.

AI systems are advancing quickly, while public understanding of their risks remains very limited. FRAME addresses this gap by bringing together creators, mentors and peers to produce high-quality content that helps people better understand what’s at stake and why it matters.

During the fellowship, I’ll be working alongside other creators and learning from experienced communicators in the AI safety space through mentorship, content sprints and regular feedback. The goal is to develop and publish content that makes AI safety ideas more accessible and easier to engage with for non-expert audiences.

Funding this project supports direct work on one of the key bottlenecks in AI safety today: translating important ideas from inside the field into forms that can reach and inform the broader public.

Why does this approach work? 

Right now, a lot of strong AI safety work exists, but much of it stays inside a relatively small group of experts. The ideas and effort are there, but a big bottleneck seems to be communication. Many people who will be affected by advanced AI are either not aware of these risks at all or only have a very shallow understanding of what they actually involve.

Good communication doesn’t solve AI safety on its own, but it plays a real role in building awareness and understanding and in influencing how seriously these risks are taken over time. If more people are aware of and understand what’s at stake, better decisions downstream become easier.

The FRAME Fellowship is designed to work directly on this bottleneck. It gives creators the time and structure to focus full-time on AI safety content and provides close feedback from people who already do this well. Instead of everyone working in isolation, the fellowship creates an environment where ideas, feedback and results compound quickly.

My track record

2020–2023: Grew 25+ social media pages across YouTube, TikTok and Instagram, with multiple pages reaching 50,000+ followers and tens of millions of total views.

2023–2025: Worked on building longer-term brands, focusing more on trust, clarity, and connection rather than just views.

2025: Created a long-form video about AI risk and the AGI race, which won the grand prize in the Future of Life Institute’s Keep the Future Human Creative Contest.

Team & mentors

In the FRAME Fellowship I’ll be working alongside other fellows who are actively creating content in the AI safety space.

The fellowship also provides direct mentorship and feedback from experienced AI safety communicators, including:

  • AI in Context (324K YouTube subscribers)

  • Rob Miles (168K YouTube subscribers)

  • Species (257K YouTube subscribers)

  • Siliconversations (99K YouTube subscribers)


Funding

The funding will cover the basic costs of attending the FRAME Fellowship full-time in San Francisco, starting in January. This includes flights (Netherlands ↔ San Francisco), accommodation, food, local transportation and basic living expenses during the eight-week program.

I’m looking for a minimum of $1,000 and a target of $12,500.

Any amount helps reduce the financial pressure of attending the fellowship. Even partial funding meaningfully covers travel and living costs and increases the likelihood that I can attend.

Reaching the full amount would allow me to commit to the fellowship full-time, without needing to worry about finances. This would let me focus entirely on producing high-quality AI safety content during the program and get the most out of the mentorship and collaboration.

Other funding I have applied for:

  • Coefficient Giving - Career development and transition fund

  • EA Funds - Long-Term Future Fund

  • Future of Life Institute - Digital Media Accelerator

If you happen to know someone working at any of these organizations, or are involved with them yourself, I’d really appreciate it if you could flag my application to them. Given that the fellowship starts at the end of January, the timeline is fairly tight, and an earlier review could make a meaningful difference.






Comments (1)

Gaetan Selle

about 14 hours ago

I’m in the same FRAME Fellowship cohort and wanted to add a quick +1. I’m excited to see Vin in the programme. He’s got a strong track record in online content and a clear approach to communicating AI risk. If you think AI safety comms is a key bottleneck, helping cover these attendance costs is a straightforward way to back it.