Greg Colbourn

@Greg_Colbourn

Global moratorium on AGI, now. Founder of CEEALAR.

https://twitter.com/gcolbourn
$85,500 total balance
$85,500 charity balance
$0 cash balance

$0 in pending offers

Outgoing donations

Comments


Greg Colbourn

4 days ago

(This was 1 Bitcoin, btw. Austin helped me route it to Manifund, allowing me to donate ~32% more by avoiding capital gains tax in the UK.)
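(A rough sketch of where the ~32% figure plausibly comes from, assuming a 24% UK capital gains tax rate on crypto and a negligible cost basis; both numbers are assumptions inferred from the figure, not stated in the comment:)

```python
# Sketch: uplift from donating an appreciated asset directly rather than
# selling it, paying capital gains tax, and donating the after-tax cash.
# Assumes (not stated above) a 24% UK CGT rate and a near-zero cost basis.
CGT_RATE = 0.24

def donation_uplift(cgt_rate: float) -> float:
    """Fraction more that reaches the charity when the asset is gifted
    whole instead of sold first: 1 / (1 - rate) - 1."""
    return 1.0 / (1.0 - cgt_rate) - 1.0

print(f"~{donation_uplift(CGT_RATE):.0%} more")  # -> ~32% more
```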


Greg Colbourn

4 days ago

I've been impressed with both Holly and Pause AI US, and Joep and Pause AI Global, and intend to donate a similar amount to Pause AI Global.


Greg Colbourn

4 days ago

It's more important than ever that PauseAI is funded. Pretty much the only way we survive the next 5-10 years is for efforts like this to succeed in securing a global moratorium on further AGI/ASI development. There's no point being rich when the world ends. I encourage others with a 7-figure or larger net worth to donate similar amounts. And I'm disappointed that all the big funders in the AI Safety space are still overwhelmingly focused on Alignment/Safety/Control, when it seems pretty clear that those aren't going to save us in time (if ever), given the lack of even theoretical progress, let alone practical implementation.


Greg Colbourn

10 months ago

Supporting this because it usefully illustrates that there are basically no viable AI Alignment plans for avoiding doom on short timelines (which is why I think we need a Pause/moratorium). I'm impressed by how much progress Kabir and team have made in the last few months, and I look forward to seeing the project grow in the next few.


Greg Colbourn

over 1 year ago

This research seems promising. I'm pledging enough to get it to proceed. In general, we need more research of this kind to establish consensus that LLMs (foundation models) are fundamentally uncontrollable black boxes (and dangerous at the frontier scale). I think this can lead - in conjunction with laws about recalls for rule-breaking / interpretability - to a de facto global moratorium on this kind of dangerous (proto-)AGI. (See: https://twitter.com/gcolbourn/status/1684702488530759680)

Transactions

For | Date | Type | Amount
Manifund Bank | 3 days ago | deposit | +$85,500
PauseAI US 2025 through Q2 | 4 days ago | project donation | -$90,000
Manifund Bank | 7 days ago | deposit | +$90,000
AI-Plans.com | 10 months ago | project donation | -$5,000
Manifund Bank | 10 months ago | deposit | +$3,800
Alignment Is Hard | over 1 year ago | project donation | -$3,800
Manifund Bank | over 1 year ago | deposit | +$5,000