Manifund
Thomas Larsen

@Thomas

regrantor

I work at the AI Futures Project, most recently on AI 2027.

https://www.linkedin.com/in/thomas-larsen/

$10,000 total balance
$10,000 charity balance
$0 cash balance

$0 in pending offers

About Me

I would like to fund:

  1. Talented early career AI governance folks who could be accelerated with small amounts of funding.

  2. Established think tanks that would like to do work on AI safety.

In the near term, I'm excited about work on transparency measures for frontier AI developers, because I think this will help improve governments' strategic awareness and make it harder for an AI lab to pursue an intelligence explosion in secret.

Once there is more societal wake-up, I think there should be an international treaty not to build superintelligence. I'm excited about work on verification methods to enforce such treaties, plus work on developing agreements that could become viable.

Outgoing donations

  • Synthesizing Standalone World-Models: $40,000 (8 days ago)

  • Acausal research and interventions: $30,000 (5 months ago)

  • AI Digest: $10,000 (6 months ago)

  • Lightcone Infrastructure: $10,000 (6 months ago)

Comments

Synthesizing Standalone World-Models

Thomas Larsen

8 days ago

I'm regranting $40k to this on the basis that few people are thinking seriously about how to solve AI alignment, Thane seems to be one of them, and I have been impressed by many of Thane's LW posts.

My median-case view is that this doesn't end up mattering, because this agenda seems very hard and unlikely to pan out. Nevertheless, I think it's big if true, relatively cheap to fund, and pretty neglected (relative to empirical ML work) at the current margin. I'd also guess that this is the type of grant that's relatively less legible to most grantmakers, so I wouldn't expect it to get funded via default sources (or I'd expect it to be dramatically underfunded).

I think it'd be pretty silly if funding bottlenecks kept Thane (and others like them) from moving to the US and collaborating in person with people who would speed them up a lot.

Acausal research and interventions

Thomas Larsen

5 months ago

Clarification: this work doesn't get funded by Good Ventures, but OP may still recommend grants of this type to non-Good-Ventures donors. In practice, this means the pool of possible funding is still much smaller, so I think the argument stands.

Acausal research and interventions

Thomas Larsen

5 months ago

I think this is very promising. This team seems to include some of the clearest thinkers in the world on acausal interactions. I've asked several people I trust a lot in this space and gotten universally positive references about the team.

My main concern is that thinking about acausal interactions is extremely difficult (meaning that zero progress is somewhat likely) and sign uncertain (so, even if they did make progress, it's not clear this would be net helpful). Overall, my view is that it still seems good to have some people working on this, and I trust this team in particular to be thoughtful about the tradeoffs.

Also, this type of work doesn't get funded by OP.

Lightcone Infrastructure

Thomas Larsen

6 months ago

I'm donating at least $10k (via Manifund).

I think Lightcone has been hugely impactful (see the above, which I think is very compelling evidence of a large amount of impact), and can't get funded from many of the usual places.

COI note: Lightcone contracted with the AI Futures Project (the org I work for) on AI 2027, and is continuing to contract with us on follow-up work. This is a donation for general support, not anything like payment for services received.

AI Digest

Thomas Larsen

6 months ago

I'm donating $10k, and I think there's a good chance I'll come back and end up donating more.

I think AI Digest has done great work and will be able to put marginal funding to good use. Past work I'm particularly excited about:

  • The Agent Village. I think getting real-world experience of how AI agents interact with tasks outside the narrow confines of a benchmark suite is a neglected form of capability evaluation.

  • Various explainers, especially the one on the METR graph.

Main reservations:

  • Nothing jumps out to me.

Conflicts of Interest:

No COIs.

Transactions

For | Date | Type | Amount
Synthesizing Standalone World-Models | 8 days ago | project donation | $40,000
Acausal research and interventions | 5 months ago | project donation | $30,000
AI Digest | 6 months ago | project donation | $10,000
Lightcone Infrastructure | 6 months ago | project donation | $10,000
Manifund Bank | 8 months ago | deposit | +$100,000