Richard Ngo

@Richard

regrantor

AI safety and governance researcher

x.com/richardmcngo

Total balance: $25,000
Charity balance: $20,000
Cash balance: $0

$5,000 in pending offers

About Me

I'm most interested in funding fundamental science related to neural networks. I'm also interested in funding mechanisms for better information aggregation and verification.

Outgoing donations

- Ambitious AI Alignment Seminar: $5,000 (pending)
- Groundless Alignment Residency 2025: $15,000 (3 months ago)
- Keep Apart Research Going: Global AI Safety Research & Talent Pipeline: $100,000 (7 months ago)
- 10th edition of AI Safety Camp: $15,000 (almost 2 years ago)

Comments

Ambitious AI Alignment Seminar

Richard Ngo

8 days ago

Looks exciting. My personal view is that there's a lot of progress waiting to be made on theoretical/agent foundations research. The quality of the program will of course depend a lot on the quality of fellows; I'm curious if there are many people already on your radar, or if you think you have good leads there.

A few other thoughts:

- I think trying to persuade people that the alignment problem is hard is often counterproductive. The mindset of "I need to try to solve an extremely difficult problem" is not very conducive to thinking up promising research directions. More than anything else, I'd like people to come out of this with a sense of why the alignment problem is interesting. Happy to talk more about this in a call.

- Some of the selection criteria seem a bit counterproductive. a) "Decent team players, non-disruptive to the group cohesion" seems like a bad way to select for amazing scientists, and might rule out some of the most interesting candidates. And b) "would care about saving the world and all their friends if they thought human extinction was likely" seems likely to select mainly for EA-type motivations which IMO also make people worse at open-ended theoretical research. Meanwhile c) "highly technically skilled" is reasonable, but I care much more about clarity of thinking than literal technical skills.

If the organizers have good reasons to expect high-quality candidates, I expect I'd pitch in $5-10k.

Groundless Alignment Residency 2025

Richard Ngo

3 months ago

I’m very excited about the groundless agenda.

Keep Apart Research Going: Global AI Safety Research & Talent Pipeline

Richard Ngo

8 months ago

Building a culture of hands-on experimentation is probably the best way to do AI safety outreach, and Apart seems to have executed on it really well.

10th edition of AI Safety Camp

Richard Ngo

almost 2 years ago

Seems important and underfunded!

Transactions

For | Date | Type | Amount
Manifund Bank | about 2 months ago | deposit | +25,000
Groundless Alignment Residency 2025 | 3 months ago | project donation | 15,000
Manifund Bank | 3 months ago | withdraw | 750
Manifund Bank | 3 months ago | deposit | +15,750
Keep Apart Research Going: Global AI Safety Research & Talent Pipeline | 7 months ago | project donation | 100,000
Manifund Bank | 9 months ago | deposit | +100,000
10th edition of AI Safety Camp | almost 2 years ago | project donation | 15,000
Manifund Bank | almost 2 years ago | deposit | +15,000