Joshua David

@JoshuaDavid

$160 total balance
$160 charity balance
$0 cash balance

$0 in pending offers

Outgoing donations

Comments


Joshua David

over 1 year ago

Establishing an AI safety lab at Oxford seems like a good idea in general, and I expect that research which focuses on mechanistic interpretability is particularly likely to yield concrete, meaningful, and actionable results.

Additionally, Fazl has a track record of competence in organizational management, as shown by his contributions to Apart Lab and his organizational work for the Alignment Jam / Interpretability Hackathon.

Disclaimer: My main interactions with Fazl, and my impressions above, came through Interpretability Hackathon 3 and subsequent discussions, which is also how I heard about this Manifund project.

Disclaimer: I do not specialize in grant-making in an impact-market context. My donation should be interpreted as an endorsement of the view that an AI safety lab at Oxford would be a net positive, not as an intentional bid to change market prices.

Transactions

For | Date | Type | Amount
Impact Accelerator Program: Biggest career program for experienced professionals | 7 months ago | project donation | $10
Act I: Exploring emergent behavior from multi-AI, multi-human interaction | 7 months ago | project donation | $20
Lightcone Infrastructure | 7 months ago | project donation | $10
Manifund Bank | over 1 year ago | deposit | +$200