Adam Gleave

@AdamGleave

regrantor

CEO & co-founder at FAR AI, a trustworthy AI non-profit; PhD AI UC Berkeley 2022; LTFF fund manager 2020-2022

https://www.gleave.me/
$364,470 total balance
$364,470 charity balance
$0 cash balance

$0 in pending offers

About Me

I am the CEO and co-founder of FAR AI, a non-profit working to incubate and accelerate new alignment research agendas. I received my PhD from UC Berkeley under the supervision of Stuart Russell. During my PhD I was fortunate to be part of the Center for Human-Compatible AI and to be funded by an Open Philanthropy Fellowship. My grantmaking experience includes serving as a Fund Manager at the Long-Term Future Fund from February 2020 to January 2022. Please see my CV for a more comprehensive list of my prior experience.

Outgoing donations

Comments

Adam Gleave

about 2 months ago

Main points in favor of this grant

  • Promising research idea; "obvious next step" but not one that anyone else seems to be working on.

  • Can Rager has relevant research experience.

  • David Bau's lab is a great place to do this kind of work.

Donor's main reservations

  • Limited track record from Can.

  • Research project is high-risk, high-reward.

Process for deciding amount

  • $6,000-$9,000/month seems to be around the going rate for junior independent research, based on previous LTFF grants (a quick annualization is sketched after this list). I went on the higher end because: (a) the stipend may need to cover office expenses, not just living expenses; (b) Can intends to be based in the Bay Area for some of this time, a high cost-of-living location.
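
For concreteness, a minimal sketch of how that monthly range annualizes; the figures are the ones quoted above, and this is a rough illustration rather than part of the grant decision:

```python
# Hypothetical annualization of the going rate quoted above for junior
# independent research ($6,000-$9,000/month, per previous LTFF grants).
low_monthly, high_monthly = 6_000, 9_000  # USD/month

low_annual = 12 * low_monthly    # $72,000/year
high_annual = 12 * high_monthly  # $108,000/year
print(f"Annualized going rate: ${low_annual:,} to ${high_annual:,} per year")
```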

Conflicts of interest

Can may spend some of his stipend on a desk and membership at FAR Labs, an AI safety co-working space administered by FAR AI, the non-profit I co-founded and lead as CEO. This is not a condition of the grant, and I have encouraged Can to explore other office options as well. I do not directly benefit financially from additional members at FAR Labs, nor would one additional member materially change FAR AI's financial position. No other conflicts of interest.


Adam Gleave

8 months ago

Typo: salary is $91,260 annualized not $92,260.

Adam Gleave

8 months ago

Main points in favor of this grant

There's been an explosion of interest in Singular Learning Theory lately in the alignment community, and good introductory resources could save people a lot of time. A scholarly literature review would also have the benefit of making the area more accessible to the broader ML research community. Matthew seems well placed to conduct this review, having already familiarized himself with the field during his MS thesis and collected a database of papers. He also has extensive teaching experience and experience writing publications aimed at the ML research community.

Donor's main reservations

I'm unsure how useful Singular Learning Theory is going to be for alignment. I'm most uncertain about whether it will actually deliver on the promise of better understanding deep networks. The positive case is that traditional statistical learning theory has serious limitations, making predictions that contradict empirical results on deep networks, so we need some replacement. But grandiose theories pop up now and again (the neural tangent kernel was hot last year, for example) yet rarely pan out. Singular learning theory has also been around for several decades, so the fact that it only recently gained popularity in ML should give some pause for thought. It seems plausible enough, and enough people are excited by it, that I'm willing to give it a shot for a relatively small grant like this. But this grant is definitely not me endorsing singular learning theory; I'd need to understand it a lot better to give a real inside-view evaluation.

Conditional on singular learning theory actually enabling a deeper understanding of neural networks, there's still a question of whether that understanding is actually useful for alignment. I feel reasonably confident it would be a positive development: having theoretical frameworks to engage with (even approximate ones) seems to be a key component of engineering systems with strong guarantees, whereas just making something that works well most of the time is much more tractable via trial and error. So, understanding seems to differentially help with building reliable systems rather than systems that merely work most of the time. But understanding does accelerate both, so there is a non-trivial backfire risk.

Process for deciding amount

Fully funded Matthew's ask, which amounts to $92,260/year annualized. The salary seems reasonable given his experience level. It's higher than US PhD stipends (~$50k/year), but below that of most alignment research non-profits in the SF Bay Area (LCA filings from Redwood show at least $140k/year for an ML Researcher; FAR AI's pay scale is $80k-$175k/year for Research Engineers) and significantly below for-profit tech jobs. Matthew will be working from Australia, where tech salaries are lower; Levels.fyi gives a median of $54k/year USD total comp, but short-term contractor positions often pay up to 2x salaried rates, so I still consider the ask to be in a reasonable range.
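
As a rough cross-check of those reference points, here is a minimal sketch of the comparison; all figures come from the paragraph above, and the 2x contractor multiplier is the heuristic mentioned there, not a precise market rate:

```python
# Back-of-the-envelope comparison of the funded ask against the reference
# points quoted above. The 2x contractor multiplier is a rough heuristic.
ask = 92_260                      # funded ask, USD/year annualized
us_phd_stipend = 50_000           # typical US PhD stipend, USD/year
redwood_ml_researcher = 140_000   # lower bound from Redwood's LCA filings
far_ai_range = (80_000, 175_000)  # FAR AI Research Engineer pay scale
au_median_total_comp = 54_000     # Levels.fyi median for Australia, USD/year

# Contractor adjustment: short-term contracts often pay up to 2x salaried rates.
au_contractor_ceiling = 2 * au_median_total_comp  # $108,000/year

print(f"Ask: ${ask:,}/year (~${ask / 12:,.0f}/month)")
print(f"Adjusted Australian comparison point: up to ${au_contractor_ceiling:,}/year")
print(f"Ask below Bay Area non-profit floor? {ask < redwood_ml_researcher}")
```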

Not directly relevant to this grant, but I would generally advocate for independently conducted research to receive lower compensation than work at alignment organizations, as I usually expect people to be significantly more productive in an organization where they can receive mentorship (and many of these organizations are at least partially funding constrained).

Conflicts of interest

I supervised Matthew for an internship in 2021 at CHAI; I have continued collaborating with him (although relatively light-touch) to see that project through to publication.