Greg Colbourn

@Greg_Colbourn

Global moratorium on AGI, now. Founder of CEEALAR.

https://twitter.com/gcolbourn
$0 total balance
$0 charity balance
$0 cash balance

$0 in pending offers

Outgoing donations

Comments

Greg Colbourn

about 2 months ago

Note that this is $90k less Manifund fees (same as my donation to PauseAI US - https://manifund.org//projects/pauseai-us-2025-through-q2?tab=comments#2d85cbfd-d392-447c-ad7f-da056aa77928 - just the fees are taken out first here)

Greg Colbourn

about 2 months ago

It's more important than ever that PauseAI is funded. Pretty much the only way we're going to survive the next 5-10 years is for such efforts to succeed in getting a global moratorium on further AGI/ASI development. There's no point being rich when the world ends. I encourage others with 7 figures or more of net worth to donate similar amounts. And I'm disappointed that all the big funders in the AI Safety space are still overwhelmingly focused on Alignment/Safety/Control when it seems pretty clear that those aren't going to save us in time (if ever), given the lack of even theoretical progress, let alone practical implementation.

Note that this is to be considered general funding to PauseAI Global, maxing out the volunteer stipends fundraiser and funding additional hires (from OP: "If we surpass our goal, we will use that money to fund additional hires for PauseAI Global (e.g. a Social Media Director).")

Greg Colbourn

about 2 months ago

(This was 1 Bitcoin btw. Austin helped me with the process of routing it to Manifund, allowing me to donate ~32% more, factoring in avoiding capital gains tax in the UK).
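The ~32% uplift is consistent with donating an appreciated asset directly rather than selling it first. A minimal sketch of that arithmetic, assuming (hypothetically) the asset is nearly all capital gain and a 24% UK CGT rate; the actual figures behind the ~32% are not stated in the comment:

```python
# Hedged sketch: uplift from donating an appreciated asset directly
# versus selling it and donating the after-tax proceeds.
# Assumptions (not from the comment): value is ~all gain, 24% CGT rate.
asset_value = 90_000          # hypothetical value of 1 BTC at donation time
cgt_rate = 0.24               # assumed UK capital gains tax rate

# Selling first: tax is due on the gain, shrinking the donation.
after_tax = asset_value * (1 - cgt_rate)

# Donating directly: no disposal for CGT purposes, so the full value goes over.
uplift = asset_value / after_tax - 1
print(f"{uplift:.0%} more by donating directly")  # → 32% more by donating directly
```

Under these assumptions the uplift is 1 / (1 - 0.24) - 1 ≈ 31.6%, matching the "~32% more" quoted.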

Greg Colbourn

about 2 months ago

I've been impressed with both Holly and PauseAI US, and Joep and PauseAI Global, and intend to donate a similar amount to PauseAI Global.

Greg Colbourn

about 2 months ago

It's more important than ever that PauseAI is funded. Pretty much the only way we're going to survive the next 5-10 years is for such efforts to succeed in getting a global moratorium on further AGI/ASI development. There's no point being rich when the world ends. I encourage others with 7 figures or more of net worth to donate similar amounts. And I'm disappointed that all the big funders in the AI Safety space are still overwhelmingly focused on Alignment/Safety/Control when it seems pretty clear that those aren't going to save us in time (if ever), given the lack of even theoretical progress, let alone practical implementation.

Greg Colbourn

12 months ago

Supporting this because it is useful in illustrating that there are basically no viable AI Alignment plans for avoiding doom with short timelines (which is why I think we need a Pause/moratorium). I'm impressed by how much progress Kabir and team have made in the last few months, and I look forward to seeing the project grow in the next few months.

Greg Colbourn

over 1 year ago

This research seems promising. I'm pledging enough to get it to proceed. In general we need more of this kind of research to establish consensus on LLMs (foundation models) basically being fundamentally uncontrollable black boxes (that are dangerous at the frontier scale). I think this can lead - in conjunction with laws about recalls for rule breaking / interpretability - to a de facto global moratorium on this kind of dangerous (proto-)AGI. (See: https://twitter.com/gcolbourn/status/1684702488530759680)

Transactions

For                                             Date                Type              Amount
PauseAI local communities - volunteer stipends  about 2 months ago  project donation  $85,500
Manifund Bank                                   about 2 months ago  deposit           +$85,500
PauseAI US 2025 through Q2                      about 2 months ago  project donation  $90,000
Manifund Bank                                   about 2 months ago  deposit           +$90,000
AI-Plans.com                                    12 months ago       project donation  $5,000
Manifund Bank                                   12 months ago       deposit           +$3,800
Alignment Is Hard                               over 1 year ago     project donation  $3,800
Manifund Bank                                   over 1 year ago     deposit           +$5,000