@michaeltrazzi
video explainers and interviews about AI Alignment
theinsideview.ai
Michaël Rubens Trazzi
about 1 month ago
Offering a token of appreciation, since I learned a lot from reading 80k's blog, listening to Rob Wiblin, and doing a coaching call. Same as Neel, I'm donating a small amount as a token of appreciation, given that this is already a large org.
Michaël Rubens Trazzi
about 1 month ago
Token of appreciation since I have personally benefited from Lightcone's work when visiting Berkeley.
Michaël Rubens Trazzi
about 1 month ago
A lot of people I interact with regularly have done the MATS program and received a lot of value from it.
Similar to Neel below, I want to give a token of appreciation that also acts as a vote in the quadratic funding round, given that MATS is a large funding opportunity.
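For readers unfamiliar with the mechanism, here's a minimal sketch of why a small donation also acts as a vote under quadratic funding (this assumes the textbook matching formula; Manifund's actual implementation and matching-pool scaling may differ):

```python
import math

def quadratic_match(contributions):
    """Textbook quadratic funding: a project's subsidized total is
    (sum of square roots of individual contributions)^2, and the match
    is that total minus the raw sum (scaled down in practice if the
    matching pool is limited)."""
    subsidized = sum(math.sqrt(c) for c in contributions) ** 2
    return subsidized - sum(contributions)

# Many small donors attract far more match than one large donor
# contributing the same raw total:
print(quadratic_match([10] * 100))  # 100 donors x $10   -> ~$99,000 match
print(quadratic_match([1000]))      # 1 donor   x $1,000 -> ~$0 match
```

Because the match grows with the number of distinct donors rather than the raw total, even a small donation meaningfully signals broad support.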
Michaël Rubens Trazzi
about 1 month ago
Update: I've now thought more about it and watched more videos. The videos seem to be getting good engagement, and Liron has been posting them quite regularly, demonstrating his willingness to grow.
I think the "reacting to video" aspect is a niche that is not currently filled and is worth doing.
I don't especially like focusing too much on basic arguments about "optimizers" or "paperclips" (mentioned in the Shapiro and Bret Weinstein videos I watched) and would prefer more recent terminology (say, discussing risks specific to language models / SoTA and how the current paradigm could become risky), but I want to donate some amount now to show support, possibly adding more later as I get more evidence.
Michaël Rubens Trazzi
about 1 month ago
@GauravYadav This looks much better, thanks.
I've now watched the two videos you linked in full and gave feedback through the appropriate channels.
Offering a small amount for now to signal support. I could imagine making a bigger donation, especially if more donations come in (bringing the project closer to minimum funding), or if new evidence comes up (say, you publish an especially promising video before the "24 days left to contribute" deadline).
Michaël Rubens Trazzi
about 1 month ago
Note: I'm donating a small amount now to signal support, and might add more later once I have more clarity on how I want to spread my donations.
I think a world with no advocacy for an AI Pause would be worse, so I am glad that PauseAI exists and organizes protests.
Main reasons I'm excited:
- I think $2.1k per volunteer is quite cheap for the amount of work a volunteer can do in a year.
- I've interviewed Holly Elmore from PauseAI US, whom I overall trust with the project.
- I think their Discord is already quite active, and the 2k "members" (Discord members?) and 100 registered volunteers show some decent engagement.
Some reservations:
- When I look at the protest list (https://pauseai.info/protests), it seems that there haven't been that many protests organized by PauseAI in 2024.
- I don't think any PauseAI protest has drawn, say, 100+ people to a single location, which would be more impressive than what I've seen so far.
Overall, I'm glad that PauseAI exists and I appreciate all of the efforts being made in that space.
Michaël Rubens Trazzi
about 1 month ago
@liron Thanks for the quick & detailed answers.
Since I wrote that comment I've watched the beginning of most of your current videos, and I'd like to add a few updates:
- I did look at the engagement (comments & likes) on current videos, and it does seem like a positive signal for a strong core audience. To get the full picture it would be good to have the number of dislikes, to check whether the video topics are simply controversial, but from the comments people do seem to value your work, which is again a good sign.
- It seems that most of the content is "Liron reacts" rather than "Doom Debates", meaning you're reacting to what someone said instead of debating them. I think debates are harder to do, since you need people to agree to debate you and to schedule it, and they would also be more interesting to watch. But reacts don't currently exist in the space, so they fill the "commentary" niche and are possibly a way to get a debate afterwards.
- The production quality is quite good on your side, though when you're reacting to David Shapiro, for example, he is only shown inside a small circle (I guess for copyright reasons?), which I think could be slightly improved.
Right now, given that your channel is quite new and given the engagement, I've updated towards the growth potential you've described being possible.
My last conflicted thoughts are on the theory of change. If, say, more David Shapiro followers watch your reacts / debates, or more generally people come to understand the basic arguments for why "doom" is likely, what do you expect them to do as a result?
My understanding is that your audience would most likely not be that technical, so is the goal to make people more concerned about AI risk in order to, say, shift public opinion on AI legislation, or to drive calls to action for the PauseAI movement? If so, it would be good to have data on that (e.g. a form asking how much viewers' perspectives have shifted, or the number of PauseAI Discord signups coming from your channel).
Anyway, thanks again for the thorough answer. I'll try watching a full Liron react and potentially see your final reply to this before coming up with a donation decision.
Michaël Rubens Trazzi
about 1 month ago
I have watched parts of the two initial videos and I think this is really promising.
Responsible Scaling Policies and SB-1047 are two topics that I think would benefit greatly from an easy-to-digest video format, and I am glad Gaurav has already done some of that work with his two initial videos.
Some reservations I have:
- 1. I think at this stage your setup looks fine, and I'd be more interested in seeing what kind of results you get without a fancy setup than in, say, spending money on expensive audio / camera equipment.
- 2. More generally, if I donate say $50, I'd prefer that $50 to go directly to one hour of your time rather than contribute towards a $1k camera or similar. Today our phones are generally as good as or better than $1k cameras.
- 3. However, I could see a version of this where you break down exactly what kind of cheap green screen + light + software you'd want, saying something like: "I really think I'd need $300: this $200 light, a $40 green screen, and a 3-month Adobe Premiere Pro subscription ($20 * 3), which would make my life much easier since a friend of mine could teach me Premiere Pro."
- 4. I think the goals in "What are this project's goals? How will you achieve them?" are not quantifiable as is.
Example of quantifiable goals:
"""
My goal is to see whether I can produce valuable explainers on AI Governance. I plan to make one video on compute governance, one video on the EU AI Act, and one final video on another legislative update as it occurs; if nothing important happens, I'd by default talk about [insert video idea you want to do anyway]. If by [date] I don't get [X people telling me they found it more than 8/10 valuable] or [Y views], then I'd conclude I am not a good fit.
"""
I should say that the two videos really were quite engaging and I ended up watching a good chunk (a few minutes) of each; overall this looks quite promising. I'm simply looking for more clarity on the precise needs & concrete plan.
Michaël Rubens Trazzi
about 1 month ago
I think communicating AI risk through debate on a podcast & YouTube channel is valuable. I have seen some episodes of Liron debating people I'd describe as "tech optimists" (George Hotz, Theo Jaffe, Guillaume Verdon) and I can confirm that he has the "stamina" and patience to get through these debates (as he mentions here).
My current main reservations are:
- framing it as a "doom debate" or a confrontational "doomers vs non-doomers" contest, which could potentially cause harm (making the movement look more like an adversarial apocalyptic cult than a technical field)
- one of his debates with Guillaume Verdon (around his startup) might have been counterproductive: he repeated the same question about "inputs / outputs", which I think was a fair question, but this didn't signal to the audience that he was engaging with Guillaume's actual plan, and he was criticized on Twitter for it.
A few clarifying questions on the actual text:
> Just two months into the project, we're already getting 1k+ views and listens per episode.
How many listens are included in "views and listens"? Is it 1k+ views + listens on average? What is the average watch time per episode? What is the average view duration (in %)?
> We're now at the point where we can convert money into faster audience growth using obvious low-hanging-fruit growth tactics.
What are these "obvious" low-hanging fruit growth tactics?
> Plus, the topic of AI risk is growing increasingly mainstream. So I bet the potential audience who wants to consume my content about AI x-risk really is in the millions.
How did you end up with "in the millions" here?
> The viewers-per-episode metric will soon hit tens of thousands
What data supports that? What do you mean by soon?
> The most likely failure mode is if my content doesn't resonate with a large audience, making it hard to get beyond 25k+ views and listens per episode, such that I give up on the 250k+ views per episode which I currently think is a realistic long-term goal.
When will you decide whether you're in that failure mode or not? Why did you decide on 250k+ views per episode as a long-term goal? What makes you think it is a "realistic long-term goal", and what do you mean by "long-term" here?
===
Final note: I think Liron is doing hard work in the space and I think this has the potential to be turned into something impactful. The previous questions were simply to get more clarity on some of the numbers mentioned above and on the potential pitfalls I raised in my reservations. I think Liron has proven to have the consistency and patience to hold these debates regularly, and I overall encourage the project, assuming some of my reservations are addressed in the future.
Michaël Rubens Trazzi
about 1 month ago
Main updates since I started the project:
- March: published a popular video, "2024: the age of AGI"
- April: edited an Ethan Perez interview
- April-May: recorded & edited my AI therapist series
- May-June (daily uploads): published 23 videos, including 8 paper walkthroughs, 4 animated explainers, and one collab (highlight). More below.
- August: recorded & edited an interview with Owain Evans (to be published in the next few days).
Detailed breakdown of the 23 videos from my daily uploads:
Paper walkthroughs:
- The Full Takeoff Model series (3 episodes)
- Anthropic’s scaling monosemanticity
- Safety Cases: How to Justify the Safety of Advanced AI Systems
Sleeper agent series:
- Walkthrough of the sleeper agent paper
- Walkthrough of the sleeper agent update
- Four-part series on sleeper agents
Nathan Labenz collab:
- One interview with Nathan Labenz from Cognitive Revolution (highlight)
- The full video includes a crosspost of Nathan Labenz’s episode with Adam Gleave
Other alignment videos:
- Discussion of previous coding projects I’ve made related to alignment
- Mental health advice for working on AI risk
- Take on Leopold’s Situational Awareness
- The AI therapist series (about AI's impact)
August: In the next week or so I will be publishing an interview with Owain Evans.
September onwards: I'm flying to San Francisco to meet people working full-time on AI Safety, so I can get a better sense of what research to cover next in paper walkthroughs, and to schedule some in-person interviews there.
After that, my main focus will be editing the recorded in-person interviews, recording paper walkthroughs, video explainers, and remote interviews, and possibly making more video adaptations of fiction (like this one).
Any additional funding that goes into this grant will go directly towards reimbursing my travel expenses in the Bay, allowing me to stay longer and record more in-person interviews.
I am happy to get connected with people to talk to (for an interview, or because they wrote an important paper I should cover).
| For | Date | Type | Amount (USD) |
|---|---|---|---|
80,000 Hours | 3 days ago | project donation | 10 |
Doom Debates - Podcast & debate show to help AI x-risk discourse go mainstream | 20 days ago | project donation | 20 |
AI Governance YouTube Channel | 20 days ago | project donation | 40 |
PauseAI local communities - volunteer stipends | 20 days ago | project donation | 10 |
Making 52 AI Alignment Video Explainers and Podcasts | 20 days ago | project donation | +331 |
Making 52 AI Alignment Video Explainers and Podcasts | 22 days ago | project donation | 500 |
Making 52 AI Alignment Video Explainers and Podcasts | 23 days ago | project donation | 500 |
Making 52 AI Alignment Video Explainers and Podcasts | 28 days ago | project donation | 500 |
Making 52 AI Alignment Video Explainers and Podcasts | 30 days ago | project donation | +50 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 month ago | project donation | +11 |
Lightcone Infrastructure | about 1 month ago | project donation | 10 |
MATS Program | about 1 month ago | project donation | 10 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 month ago | project donation | +50 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 month ago | project donation | +50 |
Manifund Bank | about 1 month ago | deposit | +600 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 month ago | project donation | +50 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 month ago | project donation | +50 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 month ago | project donation | +385 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 month ago | project donation | +600 |
Making 52 AI Alignment Video Explainers and Podcasts | about 1 month ago | project donation | +10 |
Manifund Bank | 5 months ago | withdraw | 150 |
Making 52 AI Alignment Video Explainers and Podcasts | 5 months ago | project donation | +150 |
Manifund Bank | 6 months ago | withdraw | 8022 |
Making 52 AI Alignment Video Explainers and Podcasts | 7 months ago | project donation | +5000 |
Making 52 AI Alignment Video Explainers and Podcasts | 7 months ago | project donation | +500 |
Making 52 AI Alignment Video Explainers and Podcasts | 8 months ago | project donation | +10 |
Making 52 AI Alignment Video Explainers and Podcasts | 8 months ago | project donation | +500 |
Making 52 AI Alignment Video Explainers and Podcasts | 8 months ago | project donation | +1942 |
Making 52 AI Alignment Video Explainers and Podcasts | 8 months ago | project donation | +20 |
Making 52 AI Alignment Video Explainers and Podcasts | 8 months ago | project donation | +50 |