Michel Justen

@MichelJusten


https://www.linkedin.com/in/michel-justen/

About Me

Incoming AI journalism associate at Tarbell

Previously:

  • Researched international AI governance through a Pivotal Research fellowship

  • Ran events at CEA, including the Summit on Existential Security (x2) and the New Orleans Alignment Workshop

  • Co-founded and scaled the EA opportunity board (eaopps.com)

  • Studied Neurobiology and Psychology at the University of Wisconsin–Madison (3.9 GPA) and interned at USAID and the Max Planck Institute for Human Cognitive and Brain Sciences


Comments


Michel Justen

about 1 month ago

Update: Jaime Sevilla said it's fine to use the diagrams with proper attribution.


Michel Justen

about 1 month ago

@RyanKidd Good question. I've asked Epoch just to be safe, although I think it also wouldn't be crazy to just use them and cite them in the video and video description, which was my original plan.

I won't be using the Situational Awareness diagrams; I included them above only as illustrative mock-ups.

I also won't use Planned Obsolescence diagrams; they don't have many diagrams, and I cited them only as a source for claims I made above.


Michel Justen

about 2 months ago

@mdickens Thanks for your comment! I realized this project page didn't mention that I will discuss the risks of rapid AI-accelerated AI progress (and am generally spooked by the possibility), so I've updated some sections to reflect that.

That being said, I don't think this video will speed up AI-accelerated AI R&D. Leading AI labs are already aware of and actively pursuing this direction (see Sam Altman talking about this, Anthropic's RSP and the head of RSP discussing it, Google's research on chip design, etc.). Also, other researchers closer to AI labs have already published detailed technical analyses of this dynamic (see the Epoch paper on this, the Situational Awareness section on the intelligence explosion, and Ajeya's article).

We're kind of in the worst possible world: frontier labs know about and are pursuing this possibility, but the public and policymakers don't understand it or take it seriously enough. This video aims to make the topic more accessible to people outside the labs who can take steps to prepare for or mitigate the possibility of very fast AI progress; I don't think it'll reveal anything new to the labs.