Holly Elmore organizing people for a frontier AI moratorium

Complete
Grant
$5,310 raised

Project summary

I left a full-time job to focus on figuring out the space for slowing or stopping frontier AI development. This money would be for me to get oriented, provide support in the space, and possibly found an org in the next six months.

I want funding to be an organizer in this space: to organize a people movement for an AGI development moratorium. This is the most neglected angle of AI x-risk regulation. The actors already in the space are invested in different strategies and are generally reluctant to engage a popular audience with requests for action. Some are ideologically opposed to popular messages because of their lack of nuance. I believe the people not only can be engaged responsibly on this issue, but must be if we are to have any hope of getting legislation through to slow or stop capabilities progress. AI safety doesn't yet have a class of people like the average environmentalist: not experts, but people who have a grasp of the high-level issues and form the backbone of the orgs and voter blocs that give environmentalism its political power. Alignment is not a topic I want to hand to that group, but "pause until it's safe" is a simpler, more fail-safe message that I feel good about spreading to voters and politicians. By staying connected to groups involved in "inside game" efforts, I hope to provide the sorts of popular support that are most helpful to policy efforts.

I'm asking for 6 months' salary (Bay Area cost of living) plus travel expenses and miscellaneous event and supply expenses. With this, I want to provide community infrastructure and support for the various AI x-risk regulatory efforts, particularly in the Bay Area; help them to coordinate; pursue public outreach to shift the Overton window in favor of a moratorium on frontier AI projects; and help create a base of medium-information voters dedicated to the moratorium position.

Project goals

Using the environmentalist and anti-nuclear-proliferation movements as a guide, I am exploring options for activities that cultivate the medium-information voter: from a forum dedicated to the moratorium, to writing for the press, to in-person demonstrations that get media attention, to local political action, to (for lack of a better word) "stunts" like suing AI labs for building weapons of mass destruction. By staying close to the inside game political efforts, I hope to be able to provide the kinds of pressure or permission that are needed for policymakers to move in favor of shutting it down.

My desired impact on the world is to shift the Overton window in favor of moratorium, reframing the issue from "do we have the right to interfere with AI progress?" to "AI labs have to show us their product will be safe before we allow them to continue". Because taking the argument for moratorium to the public is still highly neglected, and because the moratorium position is rather safe and robust (not easily confused for advocating harmful actions), I think that even my first flailing steps in this direction will be helpful.

I desire this shift in the public frame because it will lead to actions taken to stop AGI. Politicians respond to voter opinions. Right now, the AI labs have much easier access to politicians than we do, but by harnessing and increasing the already high public support for moratorium policies, we have another path to influence, one I do not believe the AI companies could replicate. There is already astonishingly high public support for an AI pause, and politicians take that kind of public support seriously. Ultimately, my goal is to create the public pressure necessary to get laws passed to slow or stop AGI development.

(As part of staying connected to other policy efforts, I expect to be providing general aid and capacity to many others in the space in the people organizer role. I expect to have direct impacts through that work as well, though money received from Manifund would not go to funding political campaigns or lobbying directly.)

How will this funding be used?


6-month salary, Bay Area cost-of-living: $50,000 (including ~$10,000 self-employment tax)

Travel: $10,000

Events, transportation, bureaucracy fees, supplies: $10,000

20% buffer: $15,000

Total: $85,000

What is your (team's) track record on similar projects?


My most relevant experience is as an organizer at Harvard EA from 2014 to 2020. I ran many successful projects, including a podcast and a career advising fellowship, and many projects that fizzled out, including an attempt to partner with Harvard's practical ethics center. Overall, I think my most important impact was just being there to advise people on their own individual trajectories (I got Chris Bakerlee into biosecurity, for instance) and for when important opportunities arose, such as when CEA needed someone to liaise with Harvard administration for EAG Boston. I think others at Harvard EA would be most grateful for my steadiness and willingness to stay on top of paperwork and activities fairs to keep the various Harvard EA organizations alive even when programming was light. I see myself in a similar anchor role for the moratorium movement.

I performed a generalist role (in addition to doing my own research) in several labs I was part of in my previous life as an evolutionary biologist. In labs that didn't have technicians or managers, I performed tasks such as ordering, shipping, scheduling safety evaluations, updating software and equipment, and ordering repairs. Just figuring out what needed doing was the biggest part of the job, and the thing that reminds me most of the task at hand: organizing popular sentiment in favor of AI x-risk regulation.

My PhD at Harvard was undertaken very independently and in the face of multiple external setbacks (two advisors left unexpectedly, causing me to have to change labs and projects, and later on I was very sick for a year). I did not receive much guidance and was disappointed by how much of the program in my department seemed not to be subject matter learning, but sinking or swimming in academia and within the hidden norms of the field. By the end, I was uncertain that I wanted to continue in academia, so I convinced my committee to accept a "minimal dissertation" so I could leave sooner and still get the degree. You could count this as a failure at what I set out to achieve or as an appropriate response to changing circumstances. I might have been better off quitting so I could get to work on something new sooner, but I thought the authority of the PhD would be useful, and so far it has seemed to be helpful. I think being "Dr. Elmore" is likely going to be helpful in organizing a people movement where I interface with lots of new people and need symbols of credibility.

At Rethink Priorities, I researched Wild Animal Welfare. I discovered that the most productive direction for the research was more sociological and journalistic than biological, which was not anticipated and not what I was trained for. Nonetheless, I think I figured it out and produced some good thinking about how to implement the most palatable wild animal welfare interventions. (However, I also found through my time in this job that I didn't believe WAW was a promising cause, so when pausing AGI progress burst into the Overton window and I felt really inspired, I was ready to pivot.)

I am a published scientific author with an h-index of 3 (which is not bad for an entering assistant professor) and a well-regarded popular writer. I have maintained a blog since 2016, where I occasionally share thoughts about organizing. I even won one of the EA Forum's Decade in Review Prizes for this essay. I foresee a lot of expository and persuasive writing in this project and I feel well-qualified to handle it.

What I lack the most is expertise about ML and AI Safety. I don't think that is necessary to do the people organizing well, since (hardware overhang issue perhaps notwithstanding) I don't believe the issue of whether or not to wait to develop AGI hinges on the details. But I am working hard to rectify my ignorance and believe that there is no better way to get caught up than being on the ground trying to solve actual problems. I fortunately also have access to the world's leading experts on AI Safety when I am out of my depth.

How could this project be actively harmful?


The most serious technical objection to a moratorium is hardware overhang. If a moratorium stopped compute scaling but did not stop algorithmic progress, there could be unexpected outcomes when the more powerful algorithms were eventually applied to train models with more compute, perhaps overshooting our ability to respond. Although I take this possibility seriously, I don't see an overall better solution than moratorium at this time, as I do not believe we are on track to solve the alignment problem without buying more time. Although policing algorithmic progress is a harder technical challenge, it is not impossible, and it will become much more conceivable as the basic idea of a moratorium becomes more mainstream.

Some people think that advocating for more "extreme" asks will harm the chances of moderate asks being adopted, but this is not my model of social change. When a flank is willing to push on the Overton window, it generally makes comparatively moderate proposals seem more reasonable.

Others have hesitated to appeal to the public with the issue of AI x-risk because of the possible corrosive effects of "public relations" thinking on our epistemics. It is possible to be very honest and upfront when presenting the moratorium issue, which is why I consider public outreach promising here, but not on the more complex topic of alignment. But there may still be many toxic effects of bringing this issue into the political realm.

What other funding is this person or project getting?

I received a one-time gift from Greg Colbourn to cover two months of expenses, which I am currently using. I have applied elsewhere to cover the same 6-month time period when that money is gone, but so far have no other funders. I will update if that changes. I've also launched a GoFundMe (https://www.gofundme.com/f/pause-artificial-general-intelligence) to see how well that goes.

Change log:

  • Edited 7/7/23 to propose only activities that can be funded by a 501(c)(3) organization. However, some of the activities still listed might potentially count against Manifund’s allowable “portion” of lobbying activities (as described here: https://nonprofitquarterly.org/advocacy-lobbying-501c3-charitable/ ).

  • 7/8/23: corrected year range I organized at Harvard EA. I began my degree in 2013, but didn’t begin organizing Harvard EA until 2014.

  • 7/10/23: Added Greg's name and the GoFundMe to Other Funding section.

Holly Elmore

4 months ago

Final report

Description of subprojects and results, including major changes from the original proposal

This project is coming to a close because I am no longer organizing independently! These donations helped me to start PauseAI US as a 501(c)(3) and PauseAI US Action Fund as a 501(c)(4). I will be making a new Manifund project to reflect our current situation and needs.

Spending breakdown

Almost precisely the same amount I raised here ended up going to Manifund as their cut of our income as PauseAI US's fiscal sponsor.

Michael Dickens

4 months ago

@Holly_Elmore When do you expect the new project to be up? I want to donate, ideally before Manifund's EA Community Choice ends on Sept 3. (I also intend to donate some non-Community Choice money but that can wait.)

Holly Elmore

4 months ago

@mdickens I was hoping to finish it tomorrow but no later than next Friday, 8/30. Thank you for your support!

Holly Elmore

about 2 months ago

@Holly_Elmore @mdickens Oh my gosh I cringe to read my overconfidence above, but there is a place to donate to what is now PauseAI US on Manifund finally and I hope you are still interested in donating because we need it! https://manifund.org/projects/pauseai-us-2025-through-q2

Holly Elmore

about 1 month ago


@mdickens PauseAI US is a different organization, and that is for different programming (volunteer stipends, which the US org does not offer).

Holly Elmore

about 1 month ago

That proposal was from Joep at PauseAI Global, which is mostly digital and covers countries that don't have their own dedicated PauseAI org. PauseAI US is the US org which is focused on protesting and lobbying in the US.

Michael Dickens

about 1 month ago

@Holly_Elmore I'm not sure I understand. Are they the same org structure but different legal entities? Does PauseAI operate outside the US? The website doesn't seem to make a distinction between PauseAI and PauseAI US.

Holly Elmore

about 1 month ago

@mdickens They are completely different legal entities and under separate management but the same brand. PauseAI.info is PauseAI Global and used to have some PauseAI US stuff. PauseAI-US.org (under construction) will be the (long overdue) US website.

Michael Dickens

about 1 month ago

I still have questions but I will resume this discussion on the new post so it's more visible

Holly Elmore

10 months ago

Progress update

What progress have you made since your last update?

This project has grown into PauseAI US, which is currently incorporating as a 501(c)(3) (with the 501(h) election) and a 501(c)(4).

What are your next steps?

The org is officially recognized as a nonprofit and Holly is the Executive Director. Holly is still working to obtain a fiscal sponsor so that the org can receive tax-exempt donations right away, pursuing tax-exempt status in its own right, and forming the (c)(4) so that it will be ready to lobby as that becomes more possible. The org will be hiring later this year, with the most likely first hires being a Program Manager, an Administrative Assistant, and Community Organizers.

Is there anything others could help you with?

1. Help me to obtain a fiscal sponsor with favorable terms. I have my own bookkeeping and will end the relationship when I have my own tax-exempt status, which I am pursuing immediately. I would like a sponsor that adds to the org's credibility, so a group with a strong reputation in AI Safety or science would be ideal. I would also like not to pay a huge amount, and for the sponsor to trust my bookkeeper rather than duplicating all my accounting.
2. Recommend potential hires and skilled volunteers.
3. Offer storage space for protest materials in the Berkeley area.
4. Donate money! I have a lot of bills associated with forming the org, and I'll soon have payroll tax, workers' comp, etc. The budget my board approved is around $370,000 for this fiscal year (01/01/24-12/31/24) and I'm around $250,000 short. (However, donors will probably want to wait until I have a fiscal sponsor so that donations are tax-exempt.) And I could use more money to hire more people. Facilitating donor relations would be much appreciated.

donated $2,500
Austin Chen

8 months ago

@Holly_Elmore Hey Holly! Thanks for the update; sorry to be catching you a bit late, but have you found a fiscal sponsor yet?

I'm not sure what degree of support you're looking for from a sponsor, but Manifund is generally happy to be a lightweight fiscal sponsor -- basically, accepting donations via our 501(c)(3), then forwarding funds to the grantees. I believe we have the lowest sponsorship fees in the industry at 5%, though we also provide no additional services; you're on your own for payroll, taxes, etc. Unsure if we ourselves have the "credibility" you are looking for, though sometimes we get to borrow on the credibility of our AI Safety regrantors or partners like Scott Alexander. Let me know if you're interested; you can also reach me at austin@manifund.org!

donated $100
Nathan Young

over 1 year ago

It seems plausible to me that this could be really effective in giving some cover for slowing down. Good negotiation requires alternatives if negotiation breaks down. Currently the AI orgs are playing ball, but I want some pushback if they don't. So I'm putting my money where my mouth is.

Rachel Weinberg

over 1 year ago

Approving this project! As a 501(c)(3) we can fund only a small amount of lobbying, and we're excited to put some of that towards an AI moratorium. This does seem like the type of thing that could have downside risk, but the upside is also massive, and Holly seems to have thought really carefully about the risks and isn't planning to do huge things just yet.

Note that Holly signed a different grant agreement from the default one so she didn't have to agree not to try to influence legislation.

Holly Elmore

over 1 year ago

Yay!

donated $10
Gaurav Yadav

over 1 year ago

I am making a bet (though a very small one) that this ends up having a positive EV. I’ve spent more time thinking about the role advocacy can play in pushing timelines away, and I’d place a 60% chance (medium error bars at the moment) that Holly’s efforts to try and push for regulatory measures through advocacy will end up buying more time for alignment researchers.

Currently, I am fairly optimistic that this work can get us to a ‘Risk Awareness Moment’ (https://forum.effectivealtruism.org/posts/L8GjzvRYA9g9ox2nP/prospects-for-ai-safety-agreements-between-countries) such that pushing now for regulations ends up working out really well.

"My desired impact on the world is to shift the Overton window in favour of a moratorium, reframing the issue from 'do we have the right to interfere with AI progress?' to 'AI labs have to show us their product will be safe before we allow them to continue.'"
- This seems to me like a good reframing. Though I am unsure why or how the framing currently is that we can’t interfere with AI progress. Regardless, I think trying to get labs to show us a level of interpretability before they can continue seems good!

A few reasons why moratorium or advocacy efforts might end up being negative EV (this is more a comment on the idea of a moratorium itself and not on Holly):

  • Efforts to regulate labs could end up accelerating timelines. I don’t know how feasible this actually is, but in my mind, it goes something like: "Oh, they’re trying to regulate us; better speed up progress to TAI so we can reap the benefits."

  • There might not be enough interest within Congress to change things on AI, or they might end up crafting policies that don’t actually tackle the x-risk parts of AI. In some fashion, this ended up happening with the EU AI Act in its early days, if I remember correctly. I must note that I have very little context or understanding of how the US system works.

I think this proposal lacks specific proposals or laws that might get pushed for. Are we thinking of compute regulations like ‘What does it take to catch a Chinchilla?’ Are we thinking of having laws in place that allow audits or inspections to take place?

Holly Elmore

over 1 year ago

> I think this proposal lacks specific proposals or laws that might get pushed for. Are we thinking of compute regulations like ‘What does it take to catch a Chinchilla?’ Are we thinking of having laws in place that allow audits or inspections to take place?

This is due in part to me being in an exploratory phase and in part to the fact that my aim is to change opinion and framing rather than to popularize specific policies. The capital that you gain by advocating very general messages like "Pause AI until it's safe to proceed" can be applied to supporting specific policies as the opportunity arises, like "support Prop X to require licensing at multiple points of the training and development pipeline". I don't want to be too specific from the get-go because 1) it makes it harder to spread your message and the underlying logic of it, and 2) we can't control the exact legislative opportunities we will end up having for the public to support.

Thanks for your support!

donated $2,500
Joel Becker

over 1 year ago

I made Holly an offer of $2.5k yesterday. This won't necessarily be the last offer I make to this project; I'm giving a smaller amount in order to get money out the door more quickly and to provide a credible signal to other regrantors.

Here's my reasoning about this grant as of the offer:


1) Overall, I feel excited about this proposal on the meta level.

  • As per my profile, I want to make offers to "grants that might otherwise fall through the gaps. (E.g. [...] politically challenging for other funders [...].)" My impression is that this proposal is a great example of that.

  • Also as per my profile, I want to "[o]ccasionally make small regrants as credible signals of interest rather than endorsements. (To improve [...] funder-chicken dynamics.)" I think that this application has some funder-chicken going on -- regrantors seem to be positive, but it's a slightly nervey proposal to give to. (Maybe that's for good reason -- there's more downside potential here -- but I don't feel like that's what's driving concerns, since I (and others) might expect any significant downside to come after the period in which Holly tests her fit.)

  • I often see Holly (online) say interesting and reasonable things that go against her crowd. (E.g. XYZ.) I'm interested in what she'll have to say about AI.

  • I have the weak (positive) impression that Harvard EA became noticeably less well-organized after Holly left, and an even weaker (positive) impression that this was due to Holly being a great organizer (rather than pandemic, or being Holly's fault for not enabling a strong successor, or something else).

  • Unfortunately, unlike for most other grants I will make, I don't think I'll be able to get helpful information from my network about the attractiveness of the grant.

  • I am uncomfortable with how in-group this grant feels. Holly and I have only briefly overlapped professionally; there's definitely no COI. But "fund an OEB PhD to advocate for AI moratorium" feels like something I'd be more dismissive of if I didn't know that Holly was so deep into our shared memesphere. In some ways this situation feels appropriate on the substance -- sharing ideas and professional connections with Holly gives me positive context. In other ways it feels like my judgement might be compromised. (See my last object-level point below.)

2) I feel confused about how I should value this proposal on the object level.

  • I feel somewhat uncomfortable with the role that Conjecture plays in AI discourse. The main thing I'm concerned about is something like "important decision-makers get turned off after hearing unpersuasive and possibly-also-misleading messaging." Some of this is substance, some of this is style. Advocating for a moratorium sounds like shared substance. This makes me nervous about Holly's advocacy.

  • On the other hand, the bull case for Conjecture's advocacy is something like "Overton window-shifting." I am skeptical of this case, but people I trust are more excited about it. I think Holly is better-placed to take advantage of this case in some ways (e.g. more relatable and careful speaker) and worse-placed in other ways (e.g. doesn't have the ML credibility that Connor has through Eleuther).

  • Related to my final meta-level concern: I have the sense that the LessWrong memesphere is becoming less and less relevant to how AI goes, and so am decreasingly excited in proposals that seem to speak to or on behalf of that memesphere.

Overall:

  • I've thought about this grant for too long without making much progress.

  • Seems like the most productive thing to do is to provide early support to Holly and a credible signal to other regrantors.

  • Holly and/or other grantmakers might be more likely to provide me with information that helps me think about whether to give more.

Holly Elmore

over 1 year ago

Thank you, Joel!

I don’t feel that my approach relies on the LessWrong perspective overmuch, though perhaps I lack perspective on that. I’ve felt at odds with the rationality community on this because I’m advocating an approach besides alignment, and a political one at that.

I am in favor of Overton window pushing (based on my experience in vegan advocacy and on other social movements). That said, I’m curious to hear what you oppose about Conjecture’s approach.

As for credibility, it would be nice if I knew more about the technology, but I think the question of whether to regulate/slow/stop AI transcends many of the technical details and has more to do with how much risk the people of earth will tolerate from AI experiments. If we can get momentum behind the moratorium position, the necessary expertise will shift to something more political or at the interface of the technology and regulatory apparatus. No one is currently an expert on what we need here (similar to how no one was an AI Safety expert 10 years ago). I’m a generalist and, much as I wish that better qualified people had stepped forward to lead this, they didn’t, and I think I have the organizing skills to get it started.

donated $2,500
Joel Becker

over 1 year ago

@Holly_Elmore Thanks for picking this up; I totally agree that you're not reflecting some "LessWrong perspective." My previous wording was clumsy. My thinking might be too. I'll have another try at pointing at this cluster of concerns.

  • I'm having a conversation with a senior US policymaker or journalist (maybe not focused on AI) 1 year from now. I say to them: "Holly did X work that attempted to encourage Y person/organization to do Z." Then they say: "who is Y?" Unfortunately, actor Y is not very influential.

    • An organizer with more experience in politics/media or without the shared presentation/ideas might have picked up on this and better targeted their efforts.

    • Perhaps Holly has targeted Y for bad taste reasons (Y is more sexy, or better-known to LW types) or for good efficiency reasons (Holly has second-hand connections at Y, perhaps borrowed from shared-memesphere-connections, which made it easier to engage with Y than an actor with similar influence and lesser connections).

  • A senior journalist engages with Holly 3 months from now. Associating the idea of a moratorium, or even Holly in particular, as coming from a particular memesphere, the journalist takes her less seriously, or seeks to contrast her more strongly with other voices, or comes to see her view as representative of the memesphere in ways that are unhelpful for others' lobbying, or [other bad outcome].

    • After writing, I'm not sure this concern makes sense. The reasoning seems pretty tortured, and the outcomes are not that bad anyway.

    • The better version of this might look more like my Conjecture comment in the next paragraph: people who Holly engages with might persistently take AIS concerns less seriously if pushing a message that they think is crazy.

Regarding Conjecture, I have the sense (I think second-hand from public Matt Clifford communications and a bunch of conversations with [redacted prominent AI safety researcher]) that policymakers get turned off very quickly when they hear the message that everyone's going to die. My big concern is that this turning off persists over time.

I agree that "the question of whether to regulate/slow/stop AI transcends many of the technical details" on the substance. I guess I have a sense that, at least pre-Hinton leaving, AI safety worries felt to many people like they came from amateurs who had misunderstood the nature of the technology, which is part of what turned them off from these concerns. Maybe I think this has an unhelpful interaction with arguing for a window-shifting policy.

After writing out the above, I think I see that my concerns -- even if they do make sense for conversations with policymakers and media elites -- might not port over to political organizing. I could probably get clearer on this if I had a clearer sense of the political organizing activities you might pursue. Could you describe these in more detail?

donated $2,500
Austin Chen

over 1 year ago

(@joel_bkr I really appreciate your investigation into this, which my own thoughts echo, and am matching with $2500!)

Holly Elmore

over 1 year ago

@joel_bkr Oh, I agree it's a liability to be steeped in the LW/EA community when my goal is to have a broader reach.

> policymakers get turned off very quickly when they hear the message that everyone's going to die

I am of the opinion that, even though everyone really could die from uncontrolled AI, we should be worried enough to act to prevent consequences well short of that! I don't like creating the impression that we shouldn't be concerned about lesser, but still huge, harms like possible mass unemployment or destruction of our shared social reality. I think it can be confusing, and construed either as a denial that AI will cause other harms or as saying that any harm short of extinction wouldn't be worth preventing. So, while I'm still steeped in the LW memesphere, I wouldn't make this specific error.

It's also true that I'm not an ML expert and it would be better if I were. Not understanding ML deeply could lead advocates to support worse policies. But putting the onus on AI developers to prove safety, rather than on safety advocates to prove danger, is something that I think anyone can safely advocate for.

Holly Elmore

over 1 year ago

@joel_bkr > I could probably get clearer on this if I had a clearer sense of the political organizing activities you might pursue. Could you describe these in more detail?

I am spinning up and exploring quite a bit right now, which has meant taking a ton of meetings and networking. I'm doing light people organizing already by connecting people with jobs they can do in the broader AI Safety space. I'm not decided yet on what kinds of programs to pursue that would benefit from org structure, but I'm currently:
- planning a multi-city demonstration in November pegged to the UK Summit on AI,
- in the early stages of developing (and still considering whether it's robustly positive enough to be worth the commitment) a "Moratorium Forum" to increase median voter engagement on the topic and sense of confidence in their position on AI Safety,
- finishing some academic/technical work that I hadn't taken all the way to publication before to increase my credibility, and
- working on personal essay writing about AI Safety advocacy topics to publish in magazines (something I've done before).

donated $2,500
Austin Chen

over 1 year ago

Wanted to call out that Holly has launched a GoFundMe to fund her work independently; it's this kind of entrepreneurial spirit that gives me confidence she'll do well as a movement organizer!

Check it out here: https://www.gofundme.com/f/pause-artificial-general-intelligence

donated $2,500
Austin Chen

over 1 year ago

I'm excited by this application! I've spoken once with Holly before (I reached out when she signed up for Manifold, about a year ago) and thoroughly enjoy her writing. You can see that her track record within EA is stellar.

My hesitations in immediately funding this out of my own regrantor budget:

  • Is moratorium good or bad? I don't have a strong inside view and am mostly excited by Holly's own track record. I notice not many other funders/core EAs excited for moratorium so far (but this argument might prove too much)

  • Should Holly pursue this independently, or as part of some other org? I assume she's already considered/discussed this with orgs who might employ her for this work such as FLI or CAIS?

    • I would be even more excited if Holly found a strong cofounder; though this is my bias from tech startups (where founding teams are strongly preferred over individual founders), and I don't know if this heuristic works as well for starting movements.

Holly Elmore

over 1 year ago

I’m now concerned that this proposal is out of scope for Manifund because it involves political advocacy, which I’m discussing with the team in the Manifund Discord. But I will take this opportunity to make the case for the proposal as it was written above as of end of day 7/7/23.

  • Is moratorium good or bad? I don't have a strong inside view and am mostly excited by Holly's own track record. I notice not many other funders/core EAs excited for moratorium so far (but this argument might prove too much)

I left my job at Rethink Priorities to pursue moratorium advocacy because I observed that the people in the AI safety space, both in technical alignment and policy, were biased against political advocacy. Even in EA Animal spaces (where I most recently worked), people seemed not to appreciate how much the success of “inside game” initiatives like The Humane League’s corporate campaigns (to, for example, increase farmed animal cage sizes) depended on the existence of vocal advocacy orgs like Direct Action Everywhere (DxE) and PETA, which stated the strongest version of their beliefs plainly to the public and acted in a way that legibly accorded with that. This sort of “outside game” moves the Overton window and creates external pressure for political or corporate initiatives. Status quo AI Safety is trying to play inside game without this external pressure, and hence it is often at the mercy of industry. When I began looking for ways to contribute to pause efforts and learning more about the current ecosystem, I was appalled at some of the things I was told. Several people expressed to me that they were afraid to do things the AI companies didn’t like because otherwise the companies might not cooperate with their org, or with ARC. How good can evals ever be if they are designed not to piss off the labs, who are holding all the cards? The way we get more cards for evals and for government regulations on AI is to create external pressure.

The reason I’m talking about this issue today is that FLI published an (imperfect) call for a 6-month pause and got respected people to sign it. This led to a flurry of common knowledge creation and the revelation that the public is highly receptive not only to AI Safety as a concept, but to moratorium as a solution. I’m still hearing criticism of this letter from EAs today as being “unrealistic”. I’m sorry, but how dense can you be? This letter has been extremely effective. The AI companies lost ground and had to answer to the people they are endangering. AI x-risk went mainstream!

The bottom line is that I don’t think EA is that skilled at “outside game”, which is understandable because in the other EA causes, there was already an outside game going on (like PETA for animal welfare). But in AI Safety, very unusually, the neglected position is the vanguard. The public only just became familiar enough with AI capabilities not to dismiss concerns about AGI out of hand (many of the most senior people in AI Safety seem to be anchored on a time before this was true), so the possibility of appealing to them directly has just opened up. I think that the people in the AI Safety space currently— people trained to do technical alignment and the kind of policy research that doesn’t expect to be directly implemented— 1) aren’t appreciating our tactical position, and 2) are invested in strategies that require the cooperation of AI labs or of allies that make them hesitant to simply advocate for the policies they want. This is fine— I think these strategies and relationships are worth maintaining— but someone should be manning the vanguard. As someone without attachments in the AI Safety space, I thought this was something I could offer.

Holly Elmore

over 1 year ago

> Should Holly pursue this independently, or as part of some other org? I assume she's already considered/discussed this with orgs who might employ her for this work such as FLI or CAIS?

I am considering employment at aligned orgs, but I’m strongly attracted to having my independence for some of the reasons discussed in the above comment^. The established orgs have their reputations and alliances to consider. They may have a better shot at achieving change through insider diplomacy and by leveraging connections, and thus it may be the right call for them not to speak directly to the public asking for the policies they may actually want. That can be true at the same time that there is a huge opportunity for outside game left on the table. There are many benefits to working with a team, and I may decide that they are worth sacrificing the freedom to pursue the vanguard position (if I do, I would of course return any Manifund money I had been granted). There are grassroots advocacy groups like Pause AI that already exist (mostly in Europe) that I would be able to work with as an org-less organizer. But it seems that the freedom to try this strategy without implicating anyone else is pretty valuable, so I want to see if that is an option.

Holly Elmore

over 1 year ago

  • I would be even more excited if Holly found a strong cofounder; though this is my bias from tech startups (where founding teams are strongly preferred over individual founders), and I don't know if this heuristic works as well for starting movements.

I would love to have a co-founder and assemble a team eventually. My model is that this is best achieved in a situation like mine by diving into the work and attracting the right people with it. I have been working with many people that I adore and work well with, but they are all talented people with many options, most of them in the territory of inside game. Given their expertise and connections, it might be the right move for them to do more traditional AI safety/policy activities. I think it likely that the best co-founder for the kinds of activities I think are most neglected is someone outside the core AI Safety community whom I haven’t met yet, but would be likely to meet within 6 months of dedicated organizing.

I’m also not sure that an org with several employees is the ultimate destiny of this project. It seems possible to me that other, perhaps leaner, structures, maybe more reliant on volunteers and without much cash flow, will be more prudent. So while a co-founder would be ideal for creating a stable, lasting org, I value nimbleness, info value, not being beholden to someone else’s alliances, and the ability to pivot quite a bit at this point. So it’s not obvious to me that it would be better to have a co-founder before being funded to get started.

(On a practical note, I might lose opportunities for employment by doing moratorium advocacy. If I know I’m going to be funded for the rest of the year, I can dive in, but if I don’t, I still want to consider working in situations where I have to be more diplomatic. So that consideration is pushing me to try to secure money at an earlier stage than I’m sure is ideal for you as the evaluator.)