Relocating to Montreal to work full time on AI safety

Active grant
$10,000 raised
$10,000 funding goal
Fully funded and not currently accepting donations.

Project summary

We, Damiano and Pietro, will be joining Yoshua Bengio's safety team at Mila as a postdoc and a PhD student, respectively. We will be visiting Montreal for a few months before we officially start, while we apply for and wait on our visas. The bureaucratic situation won't allow us to rent an apartment, so accommodation will be more expensive. Accordingly, we are asking for support with the relocation costs.

What are this project's goals? How will you achieve them?

The aim is to ease the financial burden of moving overseas while waiting for our visas.

How will this funding be used?

  • Accommodation;

  • Flights;

  • Other relocating expenses (e.g., shipping of personal items).

Who is on your team? What's your track record on similar projects?

Yoshua Bengio is our supervisor. He is a Turing Award winner and a vocal, committed advocate for AI safety.

What are the most likely causes and outcomes if this project fails?

N.A.

What other funding are you or your project getting?

Currently, Damiano receives a salary as a PhD student; Pietro does not. We will both receive salaries at Mila.

Austin Chen

3 months ago

Approving this grant to support Damiano and Pietro's further work on AI safety research. This follows a previous $60k grant made by Evan Hubinger, for the two to work on a paper on agency and (dis)empowerment.

donated $10,000
Adam Gleave

3 months ago

Donated, as this seems like a very leveraged grant. In-person interaction is important when starting work on a new research agenda, and this effectively buys 3 months * 2 people = 6 more person-months of it for a relatively low cost.

Damiano and Pietro have relevant experience, and I expect them to execute well on this project.

My main hesitation is that I'm skeptical of the research direction they will be working on (theoretical work to support the AI Scientist agenda). I'm unconvinced of the tractability of its more ambitious versions, and the more tractable work, like the team's previous preprint on Bayesian oracles, is theoretically neat but feels like it brushes the hard parts of the safety problem under the rug (into, e.g., the safety specification). However, enough people are excited by this direction that I feel inclined to support high-leverage exploratory work in this area to see if the agenda can be refined.