Remmelt Ellen


Program coordinator of AI Safety Camp. Initiator of RIDAISCAM.

About Me

I helped launch the first AI Safety Camp and now coordinate the program with Linda Linsefors.
My technical research elucidates reasons why the AGI control problem would be unsolvable:

My focus is to set up a legal fund for communities to restrict companies from harmfully scaling AI (deceptively advertised as being generally beneficial to society):
I recently co-organised a legal actions meeting in Brussels to that end, where guilds representing creatives and AI risk fieldbuilders came together to work out strategies for restricting AI data scraping.

Previously, I co-founded Effective Altruism Netherlands. Pre-2021 background here:




Remmelt Ellen

5 months ago

@J-C, thank you too for the conversation.

If it's helpful, here are specific critiques of longtermist tech efforts I tweeted:
- Past projects:
- Past funding:
- Godlike AI message:
- Counterarguments:
- Gaps in community focus:
- On complexity mismatch:
- On fundamental control limits:
- On comprehensive safety premises:

I have also pushed back against Émile Torres and Timnit Gebru (researchers I otherwise respect):

Can imagine those tweets got lost (I appreciate the searches you did).
You are over-ascribing interpretations somewhat (e.g. "social cluster" is a term I use to describe conversational/collaborative connections in social networks), but I get that all you had to go on there was a few hundred characters.

~ ~ ~
I started in 2015 in effective altruism movement-building, and I never imagined I would become this critical of the actions of the community I was building up.

I also reached my limit of trying to discuss specific concerns with EAs/rationalists/longtermists.
Having over a hundred conversations, only to watch interlocutors continue business as usual, does this to you.

Maybe this would change if I wrote a Katja-Grace-style post: talking positively from their perspective, asking open-ended questions so readers reflect on what they could explore further, finding ways to build on their existing directions of work so they feel empowered rather than averse to digging deeper, not stating any conclusions that conflict with their existing beliefs or sound too strong within the community's Overton window, etc.

Realistically though, people who made a career upskilling in and doing alignment work won't change their path easily, which is understandable. If the status quo for technically-minded researchers is to keep trying to invent new 'alignment solutions' with funding from (mostly) tech guys, then there is little point in clarifying why that would be a dead end.

Likewise, if AI risk people stick mostly to their own intellectual nerdy circles to come up with outreach projects to slow AI (because "we're the only ones who care about extinction risk"), then there is little point in me trying to bridge between them and other communities' perspectives.

~ ~ ~
Manifund doesn't seem like a place to find collaborators beyond those circles, but I'm happy to change my mind:

I am looking for a funder who already recognises the increasing harms of AI-scaling, and who wants to act effectively within society to restrict corporations from scaling further.

A funder who acknowledges critiques of longtermist tech efforts so far (as supporting companies to scale up larger AI models deployed for a greater variety of profitable ends), and who is looking to fund neglected niches beyond them.


Remmelt Ellen

5 months ago

Your selected quotes express my views well.

Note though that the “self-congratulatory vibes” point was in reference to the Misalignment Museum:

And I am skipping over the within-quote commentary ;)


Remmelt Ellen

5 months ago

* Note that I was talking about conflicts between the AI Safety community and communities like AI ethics and those being harmed whom AI ethics researchers are advocating for (artists and writers, data workers, marginalised tech-exploited ethnic communities, etc.).


Remmelt Ellen

5 months ago

Thank you for sharing your concerns.

How is suing AI companies in court less likely to cause conflict than the 'good cop' approach you deride?

Suing companies is business as usual. Rather than focus on ideological differences, it focusses on concrete harms done and why those are against the law.

Note that I was talking about conflicts between the AI Safety community and communities like AI ethics and those being harmed whom AI ethics researchers are advocating for (artists and writers, data workers, marginalised tech-exploited ethnic communities, etc.).

Some amount of conflict with AGI lab folks is inevitable. Our community’s attempts to collaborate with the labs to research the fundamental control problems first and to carefully guide AI development to prevent an arms race did not work out. And not for lack of effort on our side! Frankly, their reckless behaviour now to reconfigure the world on behalf of the rest of society needs to be called out.

Are you claiming that your mindset and negotiation skills are more constructive?

As I mentioned, I’m not arguing here for introducing a bad cop. I’m arguing for starting lawsuits to arrange injunctions against widespread harms done (data piracy, model misuses, toxic compute).

What leverage did we have to start with?

The power imbalance was less lopsided. When the AGI companies were in their start-up phase, they relied far more on our support (funding, recruitment, intellectual backing) than they do now.

For example, public intellectuals like Nick Bostrom had more of an ability to influence narratives than they do now. Now AGI labs have ratcheted up their own marketing and lobbying, crowding out the debate.

A few examples for illustration, but again, others can browse your Twitter:

Could you clarify why those examples are insulting to you?

I am pointing out flaws in how the AI Safety community has acted in aggregate, such as offering increasing funding to DeepMind, OpenAI and then Anthropic. I guess that’s uncomfortable to see in public now, and I’d have preferred that AI Safety researchers had taken this seriously when I expressed concerns in private years ago.

Similarly, I critiqued Hinton for letting his employer Google scale increasingly harmful models based on his own designs for years, and for still not offering, despite his influential position, much of a useful response on preventing these developments in his public speaking tours. Scientists in tech have great power to impact the world, and therefore great responsibility to advocate for norms and regulation of their technologies.

Your selected quotes express my views well. I feel you selected them with care (i.e. no strawmanning, which I appreciate!).

I think there's some small chance you could convince me that something in this ballpark is a promising avenue for action. But even then, I'd much rather fund you to do something like lead a protest march than to "carefully do the initial coordination and bridge-building required to set ourselves up for effective legal cases."

Thank you for the consideration!