I will publish 52 video explainers and podcasts about AI Alignment in a year.
My goal is to publish one video explainer or podcast about AI Alignment per week for a year (52 videos total). I will achieve this goal by:
preparing interviews, recording and editing them, and publishing the transcripts
reading AI Alignment papers, writing scripts, recording, editing, and then uploading explainers
Funding will be used to pay for my salary (including taxes).
For the full funding amount ($40,000):
Produce 52 videos
Travel budget for in-person interviews
Budget for outsourcing assistance
For amounts above the minimum funding ($500):
I will produce videos in proportion to the funding I receive (see the sketch after this list)
e.g. if I receive 50% of the full funding, I'll produce 52 / 2 = 26 videos
Primarily remote interviews
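As a rough illustration of the proportional arithmetic above (a minimal sketch; the function name and the rounding choice are illustrative, not a commitment to an exact formula):

```python
FULL_FUNDING_USD = 40_000  # full funding amount
FULL_VIDEO_COUNT = 52      # videos produced at full funding

def videos_for_funding(funding_usd: float) -> int:
    """Videos produced, proportional to funding received (capped at full funding)."""
    fraction = min(funding_usd / FULL_FUNDING_USD, 1.0)
    return round(FULL_VIDEO_COUNT * fraction)

# 50% of full funding -> 52 / 2 = 26 videos, matching the example above
assert videos_for_funding(20_000) == 26
```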
I am the only person on the team (see my website).
I am on track to publish approximately 43 video explainers and podcasts in 2024 (I published 5 in the past 6 weeks), building on 49 video explainers and podcasts in 2023.
Track record on podcasts:
Guests have included prominent alignment researchers such as Evan Hubinger, Collin Burns, Neel Nanda, Victoria Krakovna, and David Krueger.
Most episodes generate 500-1k plays (a "play" being a download or someone listening for more than one minute).
Three popular episodes: Joscha Bach (3k plays, 35k views), Robert Miles (1.3k plays, 35k views), and Connor Leahy (1.2k plays, 16k views)
Track record on video explainers:
Published 5 video explainers to date (1-5k views each)
I may underestimate the time needed per video to reach the quality/impact bar, which would force me to reduce my volume goals
Other personal and life commitments may limit the time I can devote to video creation
I get ~$100 / month from Patreon (16 supporters).
Emil Wallner
3 months ago
Love the pod! I've been on twice and it was great. You make tricky AI stuff easy to get. The talks are always smart and interesting. Hope to join more episodes soon. The Inside View is one of the best pods for the AI safety community. Keep it up!
Michaël Rubens Trazzi
4 months ago
Main updates since I started the project
- March: published a popular video: "2024: the age of AGI"
- April: edited an interview with Ethan Perez
- April-May: recorded & edited my AI therapist series
- May-June (daily uploads): published 23 videos, including 8 paper walkthroughs, 4 animated explainers, and one collab (a highlight). More below.
- August: recorded & edited an interview with Owain Evans (to be published in the next few days).
Detailed breakdown of the 23 videos from my daily uploads:
Paper walkthroughs
The Full Takeoff Model series (3 episodes)
Anthropic’s Scaling Monosemanticity
Safety Cases: How to Justify the Safety of Advanced AI Systems
Sleeper Agents series
Walkthrough of the Sleeper Agents paper
Walkthrough of the Sleeper Agents update
Four-part series on sleeper agents
Nathan Labenz collab
An interview with Nathan Labenz from The Cognitive Revolution (a highlight)
The full video includes a crosspost of Nathan Labenz’s episode with Adam Gleave
Other alignment videos
Discussion of previous alignment-related coding projects I’ve made
Mental health advice for people working on AI risk
My take on Leopold Aschenbrenner’s Situational Awareness
The AI therapist series (about AI's impact)
August: In the next week or so I will be publishing an interview with Owain Evans.
September-?: I'm flying to San Francisco to meet people working full-time on AI Safety, so I can get a better sense of what research to cover next with more paper walkthroughs, and to schedule some interviews there in person.
After that, my main focus will be editing the recorded in-person interviews, recording paper walkthroughs, video explainers, and remote interviews, and possibly making some more video adaptations of fiction (like this one).
Any additional funding for this grant will go directly toward reimbursing my travel expenses in the Bay Area, allowing me to stay longer and record more in-person interviews.
I am happy to be connected with people to talk to (for an interview, or because they wrote an important paper I should cover).
Kunvar Thaman
10 months ago
I think Michael's podcast (and sometimes "not a podcast") episodes are fantastic. The Inside View (Michael's podcast), Dwarkesh's podcast, and MLST (Machine Learning Street Talk) are my go-to AI-related podcasts, and the selection of guests that Michael brings on is impressive, with a good focus on AI safety researchers.
I've personally listened to a lot of his episodes, almost all released in 2022 and 2023, and I've recommended a fair number of them to others - they're quite well done (such as the recent episode with Evan Hubinger - really good).
I don't think production quality is the barrier to YouTube views that @Austin points out below; it's more to do with marketing, and I'd recommend reaching out to more established people in the podcasting world to figure that out.
I'm going to donate a token $500 and hope others can help fund Michael more; the work he's doing is great and quite impactful (especially for people in the early stages of an AI safety career).
Austin Chen
10 months ago
I'm donating a token amount for now to signal interest and get this project across its minimum funding bar. I have not listened much to The Inside View, but the guests Michael has attracted are quite good (e.g. recently Manifund regrantor @evhub and grantee @Holly_Elmore). The production quality seems pretty good at a glance (with editing & transcripts available). I also really like that Michael has been consistently doing these for a year; I could imagine wanting to fund this project a lot more, upon further research or testimonials.
My primary misgivings are the low-ish view counts on YouTube, and uncertainty about whether Michael's videos have been helpful for others - this is where more testimonials like Cameron's are helpful!
Cameron Holmes
10 months ago
I've listened to or watched most episodes/explainers and I think this is a great project. This really helped introduce me (an outsider) to AI alignment and helped me to understand the state of the space and its key concepts - beyond adjacent content from Rob Miles. As such, I think it could form an important part of the pathway into the field for others.
I've been a Patreon supporter since inception and I'm considering offering funding here as well.