
Facilitate lawsuits to restrict corporate AI-scaling

Not funded (Grant)
$0 raised

Project summary

This year, researchers (e.g. Katja Grace) started exploring the possibility of slowing AI scaling.

Communities on the frontline already work to restrict increasing harms:
1. Digital freelancers whose copyrighted data (art, photos, writing) are copied to train AI models used to compete in their market.
2. Product safety engineers/auditors identifying a neglect of comprehensive design and safety tests for intended uses.
3. Environmentalists tracking a rise in toxic emissions from hardware compute.

I’m connecting leaders and legal experts from each community.
We’re identifying cases to restrict AI data piracy, model misuses, and toxic compute. Court injunctions are a time-tested method for restricting harmful corporate activity, and they do not require new laws or international cooperation.

At our Brussels meeting on data piracy, a longtermist org’s head of litigation decided to probably hire 2 experts to evaluate European legal routes.

Meanwhile, I am raising funds to cover a starter budget.

Many considerations are below, but I’ve also intentionally left out some details.
If you have specific questions, feel free to schedule a call: calendly.com/remmelt/30min/

Project goals

Let me zoom out before zooming in:

HIGH-LEVEL CRUXES
Most AI Safety researchers who have studied the AGI control problem for a decade or longer seem to have come to a similar conclusion, reached through various paths of reasoning:

Solving the problem comprehensively enough for effectively unbounded-optimizing machinery would, at our current pace, take many decades at a minimum. See e.g. Soares’ elegant argument on serial alignment efforts: lesswrong.com/posts/vQNJrJqebXEWjJfnz/a-note-about-differential-technological-development

Researchers like Yampolskiy have also investigated various fundamental limits to AGI controllability. Landry and I even argue that the extent of available control is insufficient to prevent long-term convergence on extinction: lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

For this grant’s purpose, it does not much matter whether the AGI control problem is merely very hard or outright unsolvable.


What matters:
1. AI Safety researchers who have thought deeply about and worked extensively on the control problem for a decade or longer have concluded that relatively little to no progress has been made.

2. Over the same decade, AI companies made far more “progress” on scaling neural-network-based architectures, to the point where models can now cost-efficiently emulate many human cognitive tasks and are starting to be used to replace human workers.

Argument:
3. If we could slow or restrict corporate AI-scaling, this would give AI Safety researchers time to catch up on developing safe control methods, or to find out that control is fundamentally intractable.

4. Counterargument:
But can we really slow AI-scaling? Even if we could, is there any approach that does not get us embroiled in irrational political conflicts?

~ ~ ~
POSSIBLE APPROACHES
Approaches that risk causing unforeseen side-effects and political conflicts:
- to publish books or media articles about ‘powerful godlike AI’.
- to lobby for regulations that take the continued development of ‘general-purpose AI’ as a given.
- to fund or research safety on the margin inside AGI R&D labs or government AI taskforces, which use the safety results as a marketing and recruitment tool.

Specifically, risks around:
1. Epistemics:
Misrepresenting the problem in oversimplified ways that encourage further AI development and obfuscate the corporate incentives and technical complexity involved.
2. Allowances:
Opening up paths for corporations to be given a free pass (e.g. through safety-washing or appeals to national interests) to continue scaling AI models for influence and profit.
3. Conflicts:
Initiating a messaging ‘tug of war’ with other communities advocating to prevent increasing harms (e.g. artists and writers, marginalized communities targeted by uses of the tech, and the AI ethics researchers acting on their behalf).


Contrast with another approach:  

Sue AI companies in court.
1. Legal cases force each party to focus on evidence of concrete damages, and on imposing costs and restrictions on the party causing the damage. This also gets at the economic crux of the matter: without enforcing costs and injunctions for harms done, AI companies will keep competing to scale models that can be used for a wider variety of profitable purposes.

2. Unlike future risks, harms are concrete and legally targetable (twitter.com/RemmeltE/status/1666513433263480852). Harms are also easier to cut off at their source, which blocks paths to extinction too. Multi-stakeholder governance efforts aimed at preventing ambiguously defined future risks leave loopholes for AI corporations to keep scaling ahead anyway.

3. Offering legal funding for injunctions against harms would inspire cross-collaborations between the AI Safety community and other cash-strapped communities who have started to get cross with us.

This no-nonsense response to the increasing harms would even heal divides within AI Safety itself, where some (like me) are very concerned about how much support the community has offered to AGI R&D labs in exchange for surmised marginal safety improvements (forum.effectivealtruism.org/posts/XZDSBSpr897eR6cBW/what-did-ai-safety-s-specific-funding-of-agi-r-and-d-labs).

David Krueger aptly describes the mindset we are concerned about:
❝ ‘These guys are building stuff that might destroy the world. But we have to…work with them to try and mitigate things a little bit.’

❝ As opposed to just saying: ‘That’s wrong. That’s bad. Nobody should be doing it. I’m not going to do it. I’m not going to be complicit.’
(twitter.com/DavidSKrueger/status/1669999795677831169)

The AI Safety community has been in “good cop” mode with AGI R&D labs for a decade, while watching those labs develop and scale the training of AlphaZero, GPT, Claude, and now Gemini. In the process, we lost much of our leverage to hold labs accountable for dangerous unilateralist actions (Altman, Amodei, or Hassabis can ignore our threat models at little social cost and claim they have alignment researchers to take care of the “real” risks).

I won’t argue for adding a “bad cop” for balance.
Few individuals in AI Safety have the mindset and negotiation skills to constructively put pressure on AGI R&D labs, and many want to maintain collaborative ties with labs like Anthropic.

Funding legal cases, though, seems a reasonable course of action, both from the perspective of researchers aiming to restrict pathways to extinction and from the perspective of communities already harmed by unchecked AI data scraping, model misuses, and environmentally toxic compute.

~ ~ ~
FIRST STEPS
The goal is to restrict three dimensions along which AI companies consolidate power and do harm:
1. data piracy.
2. misuses of models for profit and geopolitical influence.
3. compute toxic to the environment.


See also this report: ainowinstitute.org/general/2023-landscape-executive-summary#:~:text=three%20key%20dimensions%3A


Rough order of focus:
1. Data piracy is the first focus, since there are by my count now 25+ organizations acting on behalf of communities harmed by AI companies’ extraction and algorithmic misuse of their data. Copyright, privacy, and work contract violations are fairly straightforward to establish, particularly in the EU.
2. Lawsuits against AI engineering malpractice and negligent uses of AI models come next. Liability is trickier to establish here (given ToS agreements with API developers, and so on) and will need advice and advocacy from product safety experts across industries (a medical device safety engineer and I are starting those conversations).
3. Finally, while climate change litigation has been on the rise (lse.ac.uk/granthaminstitute/publication/global-trends-in-climate-change-litigation-2022/), I expect it will take years for any litigating organization to zero in on the increasing CO₂ emissions and other eco-toxic effects caused by corporate reinvestments in mines, chip fabs, and server farms.

Over the last six months, I, along with diverse collaborators, have built connections with organizations acting to restrict AI data piracy.

I have talked with leaders of the Concept Art Association, the European Guild for AI Regulation, the National Association of Voice Actors, the Authors Guild, and the Distributed AI Research Institute, among others. We are also connecting with experienced lawyers through several organisations.

Last month, I co-organised the Legal Actions Day in Brussels (hosted by the International Center for Future Generations).

Two results from our meetings:

1. Legal prioritization research: 
The head of litigation at a longtermist organization decided to probably hire two legal experts for a month to research and weigh up different legal routes for restricting data scraping in the EU. This depends on whether the organization receives funding for that; an evaluator working for a high-net-worth donor is now making the case to the donor.

2. Pre-litigation research for an EU copyright case:
Two class-action lawsuits have been filed against OpenAI for copyright infringement in its scraping of creatives’ works (buttericklaw.com). Surprisingly, no copyright lawsuits representing creatives have yet been filed in Europe against major AI companies (a complicating factor: class-action lawsuits have only just been introduced there, and only for consumers). We are working with the Good Lobby to arrange pro-bono lawyers for initial legal advice, but eventually we will need to pay attorneys.

Up to now, I have been coordinating this in my spare time, but my funding at AI Safety Camp just ran out. It looks likely that I can arrange funding from one or more high-net-worth donors in 3+ months’ time, but only after the legal prioritization research has been done.

I’m therefore asking for a grant to cover the gap, so I can dedicate time to carefully do the initial coordination and bridge-building required to set ourselves up for effective legal cases.

How will this funding be used?

First to coordinate this project:
$40,000 to cover 6 months of pay for Remmelt (alternatively, to fund one of the items below).

Then, if more funding is available, to pay for one of these:
$40,000 to pay Tony Trupia for ongoing campaigning as a pro se litigant against OpenAI.

$45,000 for a pre-litigation research budget for a class-action lawsuit by data workers against OpenAI currently being prepared by a law group.
$45,000 for a pre-litigation research budget for a European copyright case.
$48,000 for 2 legal experts (4 weeks at ~$150/h) to do Europe-focussed legal prioritisation research at a longtermist org.

Note:
- I also submitted an application to Lightspeed, covering my pay only.
- I expanded our starter budget here, but kept it humble given that most regrantors seem to focus on technical solutions.
- The more funding, the faster we can facilitate new lawsuits to restrict data scraping and then model misuses. This is a new funding niche that can absorb millions of dollars (I have various funding opportunities lined up).
- Once we raise funds beyond $85K, I intend to set up a fiscally sponsored account through Player's Philanthropy Fund to hold uncommitted legal funds. We will then also recruit a panel of legal experts to advise on legal routes, taking inspiration from digitalfreedomfund.org.

What is your (team's) track record on similar projects?

Please see the end of 'Project goals' for the initial coordination work I carried out.

I am an experienced fieldbuilder, having co-founded and run programs for AI Safety Camp and EA Netherlands.

How could this project be actively harmful?

If we prepare poorly for a decisive legal case, that may result in the judge dismissing the arguments. This in turn could set a bad precedent (for how judges apply the laws to future cases) and/or hinder a class of plaintiffs from suing the same AI company again.

Also, details could leak to the AI companies we intend to sue, allowing them to prepare a legal defense (which, by the way, is why I'm sparing with details here).

What other funding is this person or project getting?

None at the moment, except for compensation for my hours spent coordinating legal action workshops.

J C

over 1 year ago

I must admit, I find this proposal confusing in several ways.

And I'm a bit worried that you'll end up getting funding anyway, because some funders are (understandably) bored of funding large organisations with strong track records and following the advice of others and instead want to discover their own small interesting thing to support.

So here are my concerns:

  1. How is suing AI companies in court less likely to cause conflict than the 'good cop' approach you deride? You say that current approaches to reducing AI extinction risk initiate conflict with groups working on other AI ethics issues, but some amount of 'conflict' is inevitable when you're fighting for one cause instead of another, and people emphasise the overlap with other AI ethics issues where they can, e.g. see Open Philanthropy here for a recent example: https://www.washingtonpost.com/technology/2023/07/05/ai-apocalypse-college-students/. This approach seems far less likely to cause conflict than taking people to court?

  2. As regards "In the process, we lost much of our leverage to hold labs accountable for dangerous unilateralist actions (Altman, Amodei, or Hassabis can ignore our threat models at little social cost and claim they have alignment researchers to take care of the “real” risks)": What leverage did we have to start with? I thought this was how we gained some amount of leverage? Maybe not in a confrontational 'holding labs accountable' kind of way, more acquiring opportunities to present arguments in depth, developing trust, and gaining direct decision-making power. It's a hotly debated question whether AI safety efforts to date have ended up doing more harm than good and I think it's important to convey that - you shouldn't just say, "Well now we're ahead of China" / "Well now there's political momentum behind governance" / "Well now the top AI labs have substantial safety teams" / "Well now our timelines are shorter" / "Well now the AI labs can ignore our threat models." Also, why do you think they only "claim" to have alignment researchers? And why is "real" in inverted commas? Do you actually think copyright infringement and environmental harms from computers etc. are the "real" risks?

  3. "Few individuals in AI Safety have the mindset and negotiation skills to constructively put pressure onto AGI R&D labs." Are you claiming that your mindset and negotiation skills are more constructive? I can't say I agree, but others are welcome to browse your Twitter and make up their own minds.

  4. On that, I also think it's a bit rude and misleading to ask the EA/rationalism/longtermism community to pay you a "humble" $80,000pa pro rata and name-drop community members in your proposal, while simultaneously publicly insulting us all (and even Hinton for some reason) elsewhere on a regular basis. A few examples for illustration, but again, others can browse your Twitter:

    1. "I personally have tried to be reasonable for years about bringing up and discussing potential harms of the leaders of the EA/rationality/longtermist social cluster making monolithic assumptions about the need to deploy or update systems to “do good” to the rest society."

    2. "Despite various well-meaning smaller careful initiatives [do you mean yours? yours are well-meaning and careful and ours aren't? I've gotta say, it looks like the opposite from where I'm standing], tech-futurist initiatives connected to EA/longtermism have destabilised society in ways that will take tremendously nuanced effort to recover from, and sucked a generation of geeks into efficiently scaling dangerous tech."

    3. "the self-congratulory vibe of alignment people who raised the alarms (and raise funds for DeepMind and OpenAI, shhh). And the presumptive “we can decide how to take over the world” white guys vibe. And the stuckness in following the whims and memes of your community."

    4. "Sums up the longtermist AI con game."

    5. "I was involved as effective altruism community builder from 2015, and I reflected a lot since on what was happening in the community and what I was involved in. What

      @timnitGebru and @xriskology have written is holistically correct, is my own conclusion."

I think there's some small chance you could convince me that something in this ballpark is a promising avenue for action. But even then, I'd much rather fund you to do something like lead a protest march than to "carefully do the initial coordination and bridge-building required to set ourselves up for effective legal cases." (I might be persuaded to fund Katja or maybe this 'longtermist org' you mention once I had more info though.)

Remmelt Ellen

over 1 year ago

Thank you for sharing your concerns.

How is suing AI companies in court less likely to cause conflict than the 'good cop' approach you deride?

Suing companies is business as usual. Rather than focusing on ideological differences, it focuses on concrete harms done and why those are against the law.

Note that I was talking about conflicts between the AI Safety community and communities like AI ethics and those being harmed whom AI ethics researchers are advocating for (artists and writers, data workers, marginalised tech-exploited ethnic communities, etc.).

Some amount of conflict with AGI lab folks is inevitable. Our community’s attempts to collaborate with the labs to research the fundamental control problems first and to carefully guide AI development to prevent an arms race did not work out. And not for lack of effort on our side! Frankly, their reckless behaviour in reconfiguring the world on behalf of the rest of society now needs to be called out.

Are you claiming that your mindset and negotiation skills are more constructive?

As I mentioned, I’m not arguing here for introducing a bad cop. I’m arguing for starting lawsuits to arrange injunctions against widespread harms done (data piracy, model misuses, toxic compute).

What leverage did we have to start with?

The power imbalance was less lopsided. When the AGI companies were in their start-up phase, they relied a lot more on our support (funding, recruitment, intellectual support) than they do now.

For example, public intellectuals like Nick Bostrom had more of an ability to influence narratives than they do now. AGI labs have since ratcheted up their own marketing and lobbying, and in that way crowd out the debate.

A few examples for illustration, but again, others can browse your Twitter:

Could you clarify why those examples are insulting to you?

I am pointing out flaws in how the AI Safety community has acted in aggregate, such as offering increasing funding to DeepMind, OpenAI and then Anthropic. I guess that’s uncomfortable to see in public now, and I’d have preferred that AI Safety researchers had taken this seriously when I expressed concerns in private years ago.

Similarly, I critiqued Hinton for letting his employer Google scale increasingly harmful models based on his own designs for years and, despite his influential position, for still not offering much of a useful response in his public speaking tours now on how to prevent these developments. Scientists in tech have great power to impact the world, and therefore great responsibility to advocate for norms and regulation of their technologies.

Your selected quotes express my views well. I feel you selected them with care (i.e. no strawmanning, which I appreciate!).

I think there's some small chance you could convince me that something in this ballpark is a promising avenue for action. But even then, I'd much rather fund you to do something like lead a protest march than to "carefully do the initial coordination and bridge-building required to set ourselves up for effective legal cases."

Thank you for the consideration!

Remmelt Ellen

over 1 year ago

Your selected quotes express my views well

Note though that the “self-congratulatory vibes” point was in reference to the Misalignment Museum: https://twitter.com/RemmeltE/status/1635123487617724416

And I am skipping over the within-quote commentary ;)

Kerry Vaughan

over 1 year ago

I take JC's comment, particularly point 4, to be an attempt to discredit Remmelt as a candidate for funding because he has been critical of the EA/rationalism/longtermism communities on Twitter. This strikes me as a bad precedent to set and quite harmful to the epistemic health of these communities.

I have reached out to Remmelt to offer him $500 in funding (off of this platform) as a show of support for the courage it takes to share one’s criticisms of a community despite entangled funding relationships. I’d like to see more courage of this type, not less.

J C

over 1 year ago

Thanks for engaging, Remmelt.

Kerry, I dunno man, constructive debate is good for epistemic health - indeed I've funded it before and I imagine it's the kind of thing a lot of people here are looking for. But I don't think regular insults on social media are a healthy norm. Others may not agree, but hopefully you can agree that such behaviour is at least relevant to a request for funding to handle delicate political situations better than other actors have.

J C

over 1 year ago

Oh and Remmelt, you wanted clarification on why I see those examples as insults (as opposed to constructive debate). It's an interesting question and I think the answer for me is often, "You use descriptors with only neutral or negative connotations without being specific about the accusation." Which makes it hard to progress the disagreement and simply leaves people feeling negative about the subject.

For example, to just take the first quote: "I personally have tried to be reasonable for years" implies that people have not responded appropriately to reasonable disagreement (rather than simply not finding your arguments persuasive); "social cluster" sounds like an accusation of harmful nepotism; "monolithic" has negative connotations but I'm not sure what the specific disagreement is; "assumption" suggests an absence of reasoning/argument; "do good" in inverted commas to me is mocking, saying that not only are they actually doing harm, they're not even trying to do good?

IMO it would have been much better to say something like, "The fact that X is friends with Y creates a conflict of interest that makes me more skeptical of claim Z" (and preferably some recognition of something positive but I know people only have so much time and energy) or to not write the tweet at all.

Remmelt Ellen

over 1 year ago

@J-C, thank you too for the conversation.

If it's helpful, here are specific critiques of longtermist tech efforts I tweeted:
- Past projects: twitter.com/RemmeltE/status/1626590147373588489
- Past funding twitter.com/RemmeltE/status/1675758869728088064
- Godlike AI message: twitter.com/RemmeltE/status/1653757450472898562
- Counterarguments: twitter.com/RemmeltE/status/1647206044928557056
- Gaps in community focus: twitter.com/RemmeltE/status/1623226789152841729
- On complexity mismatch: twitter.com/RemmeltE/status/1666433433164234752
- On fundamental control limits: twitter.com/RemmeltE/status/1665099258461036548
- On comprehensive safety premises: twitter.com/RemmeltE/status/1606552635716554752

I have also pushed back against Émile Torres and Timnit Gebru (researchers I otherwise respect):
- twitter.com/RemmeltE/status/1672943510947782657
- twitter.com/RemmeltE/status/1620596011117993984

I can imagine those tweets got lost (I appreciate the searches you did).
You are over-ascribing interpretations somewhat (e.g. "social cluster" is a term I use to describe conversational/collaborative connections in social networks), but I get that all you had to go on there was a few hundred characters.

~ ~ ~
I started in effective altruism movement-building in 2015, and I never imagined I would become this critical of the actions of the community I was building up.

I have also reached my limit of trying to discuss specific concerns with EAs/rationalists/longtermists.
Having a hundred-plus conversations only to watch interlocutors continue business as usual does this to you.

Maybe this would change if I wrote a Katja-Grace-style post: talking positively from their perspective, asking open-ended questions so readers reflect on what they could explore further, finding ways to build from their existing directions of work so they feel empowered rather than averse to digging deeper, not stating any conclusions that conflict with their existing beliefs or sound too strong within the community's Overton window, etc.

Realistically though, people who have made a career of upskilling in and doing alignment work won't change their path easily, which is understandable. If the status quo for technically-minded researchers is to keep trying to invent new 'alignment solutions' with funding from (mostly) tech guys, then there is little point in clarifying why that would be a dead end.

Likewise, where AI risk people stick mostly to their own intellectual nerdy circles to come up with outreach projects to slow AI (because "we're the only ones who care about extinction risk"), there is little point in my trying to bridge between them and other communities' perspectives.

~ ~ ~
Manifund doesn't seem like the place to find collaborators beyond those circles, but I'm happy to change my mind:

I am looking for a funder who already relates to the increasing harms of AI-scaling, and who wants to act effectively within society to restrict corporations from scaling further.
A funder who acknowledges critiques of longtermist tech efforts so far (as having supported companies in scaling up larger AI models deployed for a greater variety of profitable ends), and who is looking to fund neglected niches beyond them.
A funder who acknowledges critiques of longtermist tech efforts so far (as supporting companies to scale up larger AI models deployed for a greater variety of profitable ends), and who is looking to fund neglected niches beyond.