Comments


J C

over 1 year ago

Oh, and Remmelt, you wanted clarification on why I see those examples as insults (as opposed to constructive debate). It's an interesting question, and I think the answer for me is often: "You use descriptors with only neutral or negative connotations without being specific about the accusation." That makes it hard to progress the disagreement and simply leaves people feeling negative about the subject.

For example, to take just the first quote: "I personally have tried to be reasonable for years" implies that people have not responded appropriately to reasonable disagreement (rather than simply not finding your arguments persuasive); "social cluster" sounds like an accusation of harmful nepotism; "monolithic" has negative connotations, but I'm not sure what the specific disagreement is; "assumption" suggests an absence of reasoning/argument; and putting "do good" in inverted commas reads as mocking, as if to say that not only are they actually doing harm, they're not even trying to do good.

IMO it would have been much better to say something like, "The fact that X is friends with Y creates a conflict of interest that makes me more skeptical of claim Z" (and preferably with some recognition of something positive, though I know people only have so much time and energy), or not to write the tweet at all.


J C

over 1 year ago

Thanks for engaging, Remmelt.

Kerry, I dunno man, constructive debate is good for epistemic health - indeed I've funded it before and I imagine it's the kind of thing a lot of people here are looking for. But I don't think regular insults on social media are a healthy norm. Others may not agree, but hopefully you can agree that such behaviour is at least relevant to a request for funding to handle delicate political situations better than other actors have.


J C

over 1 year ago

I must admit, I find this proposal confusing in several ways.

And I'm a bit worried that you'll end up getting funding anyway, because some funders are (understandably) bored of funding large organisations with strong track records and of following others' advice, and instead want to discover their own small, interesting thing to support.

So here are my concerns:

  1. How is suing AI companies in court less likely to cause conflict than the 'good cop' approach you deride? You say that current approaches to reducing AI extinction risk initiate conflict with groups working on other AI ethics issues, but some amount of 'conflict' is inevitable when you're fighting for one cause instead of another, and people emphasise the overlap with other AI ethics issues where they can; see Open Philanthropy for a recent example: https://www.washingtonpost.com/technology/2023/07/05/ai-apocalypse-college-students/. That approach seems far less likely to cause conflict than taking people to court?

  2. As regards "In the process, we lost much of our leverage to hold labs accountable for dangerous unilateralist actions (Altman, Amodei or Hassabis can ignore at little social cost our threat models and claim they have alignment researchers to take care of the “real” risks)": what leverage did we have to start with? I thought this was how we gained some amount of leverage? Maybe not in a confrontational 'holding labs accountable' kind of way, but more by acquiring opportunities to present arguments in depth, developing trust, and gaining direct decision-making power. It's a hotly debated question whether AI safety efforts to date have ended up doing more harm than good, and I think it's important to convey that - you shouldn't just say, "Well now we're ahead of China" / "Well now there's political momentum behind governance" / "Well now the top AI labs have substantial safety teams" / "Well now our timelines are shorter" / "Well now the AI labs can ignore our threat models." Also, why do you think they only "claim" to have alignment researchers? And why is "real" in inverted commas? Do you actually think copyright infringement and environmental harms from computers etc. are the "real" risks?

  3. "Few individuals in AI Safety have the mindset and negotiation skills to constructively put pressure onto AGI R&D labs." Are you claiming that your mindset and negotiation skills are more constructive? I can't say I agree, but others are welcome to browse your Twitter and make up their own minds.

  4. On that, I also think it's a bit rude and misleading to ask the EA/rationalism/longtermism community to pay you a "humble" $80,000 p.a. pro rata and name-drop community members in your proposal, while simultaneously publicly insulting us all (and even Hinton for some reason) elsewhere on a regular basis. A few examples for illustration, but again, others can browse your Twitter:

    1. "I personally have tried to be reasonable for years about bringing up and discussing potential harms of the leaders of the EA/rationality/longtermist social cluster making monolithic assumptions about the need to deploy or update systems to “do good” to the rest society."

    2. "Despite various well-meaning smaller careful initiatives [do you mean yours? yours are well-meaning and careful and ours aren't? I've gotta say, it looks like the opposite from where I'm standing], tech-futurist initiatives connected to EA/longtermism have destabilised society in ways that will take tremendously nuanced effort to recover from, and sucked a generation of geeks into efficiently scaling dangerous tech."

    3. "the self-congratulory vibe of alignment people who raised the alarms (and raise funds for DeepMind and OpenAI, shhh). And the presumptive “we can decide how to take over the world” white guys vibe. And the stuckness in following the whims and memes of your community."

    4. "Sums up the longtermist AI con game."

    5. "I was involved as effective altruism community builder from 2015, and I reflected a lot since on what was happening in the community and what I was involved in. What

      @timnitGebru and @xriskology have written is holistically correct, is my own conclusion."

I think there's some small chance you could convince me that something in this ballpark is a promising avenue for action. But even then, I'd much rather fund you to do something like lead a protest march than to "carefully do the initial coordination and bridge-building required to set ourselves up for effective legal cases." (I might be persuaded to fund Katja or maybe this 'longtermist org' you mention once I had more info though.)