Video essay on risks from AI accelerating AI R&D

Proposal · Grant
Closes December 10th, 2024
$15 raised
$1,000 minimum funding
$3,000 funding goal


Project summary

I aim to create a ~10-minute video essay explaining how labs could use AI systems to accelerate AI R&D and how this could quickly bring about a level of AI progress society seems unprepared for. The video will distill technical work by Epoch AI, Tom Davidson, and expert feedback into an accessible, animated explanation.

Key reasons to fund:

  • AI-accelerated AI R&D is a complicated yet consequential concept. Policymakers and the public should be aware that AIs doing AI research could significantly shorten timelines and speed up takeoff, so they can better prepare for this possibility or take actions to mitigate it. The topic deserves clear, well-visualized explanations.

  • Adequate funding will enable me to work with skilled animators (hopefully Rational Animations), turning this from an amateur side-project into a well-visualized distillation.

  • I’m well-equipped to make an accurate and engaging video: I’ve previously done AI governance research, completed ambitious independent distillation projects, and made videos. I’m also well-connected to subject-matter experts who can review the video script.

  • In addition to the information dissemination value, producing a high-quality video could open up new AI-communication career opportunities and properly test whether video creation/content distillation is a promising career direction.


What are this project's goals? How will you achieve them?

I will publish a YouTube video essay on AIs accelerating AI development—and the corresponding risks—by the end of the year.

Goals:

  • Opens up 3 new collaboration, grant, or job opportunities (e.g., collaborating with the 80k video team or Planned Obsolescence, receiving grant interest from LTFF, etc.)

  • Gets complimented by 5 subject-matter experts or experienced content creators, which I’d read as a sign of excitement about more videos like this.

  • Receives > 5k views. (I’m not anchoring that much on views since I think a lot of the benefit of this video will come from unlocking new opportunities and helping me understand my interests. By default this video will go on my YouTube channel (small audience), but if late drafts look promising I will explore ways to publish on an established channel.)

How I’ll achieve goals: 

  • Post about the video on my Twitter, and ask a few popular warm contacts to signal-boost

  • Post about the video on my Instagram

  • Post about the video on my Substack

  • Post about the video on my LinkedIn

  • Post about video on cross-org Slack channels 

  • Politely ask information-dissemination platforms and specialists (e.g., AI safety newsletters) to consider sharing

  • Send the video to other video producers in the AI safety orbit (e.g., 80K, Rational Animations, FLI, etc.)

  • Send the video to experts who have published on this topic (e.g., Gaurav Sett, CAIS folks)

  • Link to this video in future applications and on my resume 


General project steps:

  1. [Done] Outline development and expert feedback 

  2. Script development and expert feedback (ongoing, v1 done by end of this week)

  3. Animation planning and storyboarding (with Rational Animations and freelancers)

  4. Recording and initial video production

  5. Final expert review and refinements

  6. Publication and targeted distribution (by Dec 15)

How will this funding be used?

I’ll use funding to pay people to animate parts of my video.  

When I started this project, I wasn’t planning on asking for funding: it was a hobby project between an AI governance fellowship (Pivotal Research) and an AI journalism job (Tarbell). It’s still a hobby project, but, as I’ve developed the script, I’ve spotted opportunities for helpful visualizations. I need animators to help me create these visuals. 

What are some example animations I’m excited about? 


Animated diagrams to illustrate how each successive iteration of frontier models will automate more and more of the AI R&D workflow, and how these automations will compound because of the sheer number of AI assistants that can do AI R&D. (source: Epoch AI, Planned Obsolescence)


Animated diagrams to map % of AI R&D workflow to timelines, illustrating how the % of AI R&D workflow that can be automated is now non-zero and increasing, and how this could shorten the time until we have enormously capable models. (source: Planned Obsolescence)
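To make that mapping concrete, here is a minimal, illustrative sketch (my own toy Amdahl-style framing with made-up numbers, not a model taken from the sources above) of how the automated fraction of the R&D workflow could translate into an overall speedup:

# Toy illustration, hypothetical numbers: if AI automates a fraction f of the
# AI R&D workflow and speeds that fraction up by a factor s, the overall
# speedup to R&D is roughly 1 / ((1 - f) + f / s).
def overall_speedup(f: float, s: float) -> float:
    return 1 / ((1 - f) + f / s)

for f in (0.1, 0.5, 0.9):
    print(f"automated fraction {f:.0%}: ~{overall_speedup(f, s=10):.1f}x faster R&D")
# -> 10%: ~1.1x, 50%: ~1.8x, 90%: ~5.3x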

Animations to illustrate how much of AI progress is driven by algorithmic progress: even if there are physical bottlenecks on how much compute hardware is available, AIs may still increase effective compute drastically via algorithmic improvements. (I may also cover how AIs can drive progress via post-training enhancements and improved experiment efficiency.) (source: Epoch AI, Situational Awareness)
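And a rough back-of-the-envelope illustration of the effective-compute point (toy numbers chosen for illustration, not Epoch's published estimates), treating effective compute as physical compute times algorithmic efficiency:

# Toy illustration, hypothetical numbers: hardware is held flat (bottlenecked),
# while algorithmic efficiency doubles 1.5 times per year.
years = 4
hardware_factor_per_year = 1.0   # flat physical compute
algo_doublings_per_year = 1.5    # assumed algorithmic-efficiency doublings per year
effective_multiplier = hardware_factor_per_year**years * 2**(algo_doublings_per_year * years)
print(f"~{effective_multiplier:.0f}x effective compute after {years} years despite flat hardware")
# -> ~64x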

The attached diagrams are mere mock-ups. I want to pay an animator to help re-style and re-label them and add smooth motion within and between diagrams.

What are the most likely causes and outcomes if this project fails?

Failing would mean not posting any video, or posting one that’s delayed and not visually engaging enough to receive many views. I see little downside risk.

How much have you accomplished on this project?

I have written an outline that has been reviewed by folks like Fin Moorhouse, Tom Davidson, and IAPS researchers. I’m now halfway through a detailed script, which I’ll have ready for expert review by the end of next week. (Email me at michelm.justen@gmail.com if you’d like to see the outline or script draft.)

I expect animations to become a bottleneck in a week, so I’m looking for funding as soon as possible <3


Michael Dickens

4 days ago

What is the case for impact for this? At a glance my guess is it would increase x-risk—a better explanation of AI accelerating AI R&D would (marginally) make people want to do that more, which shortens timelines / increases the probability of a fast takeoff.


Michel Justen

4 days ago

@mdickens Thanks for your comment! I realized this project page didn't mention that I will discuss the risks of rapid AI-accelerated AI progress (and am generally spooked by the possibility) so I've updated some sections to reflect that.

That being said, I don't think this video will speed up AI-accelerated AI R&D. Leading AI labs are already aware of and actively pursuing this direction (see Sam Altman talking about this, Anthropic's RSP and head of RSP talking about it, Google's research on chip design, etc.). Also, other researchers who are closer to AI labs have already published detailed technical analyses of this dynamic (see Epoch's paper on this, the Situational Awareness section on the intelligence explosion, Ajeya's article).

We're kind of in the worst possible world where frontier labs know about and are pursuing this possibility, but the public and policymakers don't understand it or take it seriously enough. This video aims to make this topic more accessible to the people outside of labs who can take steps to prepare for or mitigate the possibility of very fast AI progress—I don't think it'll reveal anything new to labs.