
AI Safety Reading Group at metauni [Retrospective]

Active grant — $815 raised of $1,000 funding goal

Summary of project. Metauni is an open 'metaverse university': a small community of scholars who meet in 3D virtual environments. From June 2022 to February 2024, Dr. Daniel Murfet and I ran a weekly AI Safety Reading Group there. The reading group had 3 to 5 attendees each week, plus occasional guest presenters. We mainly discussed and critiqued technical papers on AI safety, though we sometimes covered philosophical papers or papers on AI ethics. A full list of readings is available on the group webpage.

Retrospective grant. I am applying for retrospective funding for this project as part of the EA Community Choice funding round. Since the reading group did not have many attendees, I expect it to draw limited funding. However, the project had a sizable impact on my career, and I believe also on Daniel's and those of some of the other attendees; it helped our local community orient towards the field. As a participant in the Community Choice round, I plan to direct some of my own funding towards this project, alongside other projects that have had a comparable impact on my career.

Use of funding. To be clear, the reading group is not currently active, and there are no current plans to re-open it. Any funding will be shared between the organisers (myself and Daniel Murfet) and will presumably contribute to our future projects. This funding is not tied to specific future projects, nor to reviving the metauni reading group, though the group may re-open organically at some point in the future (or it may not).

Team. The team comprises the following two reading group directors:

  • Matthew Farrugia-Roberts (me). I am a freelance AI safety researcher and incoming DPhil candidate at Oxford. I currently contract with Timaeus and the Krueger AI Safety Lab.

  • Daniel Murfet. Daniel is a mathematician at the University of Melbourne who researches theoretical foundations of interpretability and alignment of deep neural networks. Daniel also collaborates with Timaeus.

Most likely causes/outcomes of failure. This is not really applicable since the funding is retrospective.

Other sources of funding. The project has received no other funding and was entirely driven by the volunteer contributions of the participants.


Donations:

  • Jord Nguyen donated $50 (about 1 month ago)

  • $100 (about 2 months ago)

  • $10 (2 months ago)

  • $40 (2 months ago)

  • $150 (2 months ago)