
Medical Expenses for CHAI PhD Student

Complete
Grant
$23,043 raised

Project summary

Rachel is a technical AI safety researcher and PhD student at the Center for Human-Compatible AI (CHAI) at UC Berkeley. She researches problems with reward learning as a path to AGI alignment and mentors promising junior researchers through CHAI internships and one-off advising for 80,000 Hours.

This grant will cover her medical and productivity expenses so that she can work full time on technical AI safety research & mentorship. She has had health problems since 2020, and because of her low stipend and inflexible choice of insurance, has been struggling to pay for the necessary medical care. This has already reduced her productivity and the time she can devote to her highest-impact work, and will continue to do so if these expenses aren’t covered.

Project goals

Without additional funding, Rachel would have to take on extra work in EA operations or tutoring or, more likely, switch to a full-time internship in tech or finance for a few months, pulling her away from her much higher-impact AI safety work. This grant is meant to help her devote more time to that work.

Additionally, receiving the funding up front may allow her to undergo treatments and recover more quickly, increasing the total amount she can work. Finally, it will reduce her financial stress, which could otherwise cause burnout, or at the very least distraction, further diverting her from her work.

How will this funding be used?

In order of priority, with the top of the list covered first and lower items covered as more money is received:

  • $15,000: medical expenses

  • $10,000: buffer for miscellaneous and unanticipated medical costs

  • $9,000: time-saving services like a PA (who would primarily do medical-related admin work) and experiments with promising health interventions like a sleep study or physical therapy

  • $9,600: taxes for this grant

  • additional funding: meal delivery services, personal cleaning, and other time-saving services.

In her Nonlinear application, Rachel requests $43k ($34k post-tax), and estimates that she could productively use up to $58k ($45k post-tax).
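As a quick arithmetic check (mine, not from the application), the itemized amounts above sum to the pre-tax goal, and the pre-tax figures are consistent with the ~22% tax rate that Rachel Weinberg mentions in a later comment:

$$
\begin{aligned}
15{,}000 + 10{,}000 + 9{,}000 + 9{,}600 &= 43{,}600 \\
34{,}000 \,/\, (1 - 0.22) &\approx 43{,}600 \\
45{,}000 \,/\, (1 - 0.22) &\approx 57{,}700 \approx 58{,}000
\end{aligned}
$$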

How could this project be actively harmful?

I don’t see how it could be.

What other funding is this person or project getting? Where else did this person or project apply for funding in the past?

Rachel received a Nonlinear Support Fund grant, which reimburses up to $10k for coaching, tutoring, and other services that do not overlap with the ones being funded by this grant.

donated $10,000

Evan Hubinger

about 1 year ago

Main points in favor of this grant

I don't have a strong take on how good Rachel's current research is, but she's clearly doing relevant work and it seems high-impact to cover medical expenses to let her keep doing that if doing so is cheap.

Donor's main reservations

I am more confident that covering medical expenses is good than I am that other timesaving services such as a PA will be good.

Process for deciding amount


I have committed $10k for now. Based on the currently committed funding, she may or may not have enough to cover her medical expenses, depending on how much buffer is necessary; if @rachelfreedman indicates to me that the current amount is insufficient, I would likely be willing to commit more.

Conflicts of interest

None.


Marcus Abramovitch

over 1 year ago

I saw this a couple of weeks ago and thought it was a really good idea. The main reason I didn't fund it is that there are other things I want to fund more. It would be a real shame if the EA community couldn't step in and do this.


Austin Chen

over 1 year ago

Thanks for the writeup, Rachel W -- I think paying researchers in academia so they are compensated more closely to industry averages is good. (It would have been helpful to have a topline comparison, e.g. "Berkeley PhDs make $50k/year, whereas comparable tech interns make $120k/year and full-time employees make $200k/year.")

I really appreciate Rachel Freedman's willingness to share her income and expenses. Talking about salary and medical costs is always a bit taboo; it's brave of her to publish these so that other AI safety researchers can learn what the field pays.

Other comments:

  • We'd love to have other regrantors (or other donors!) help fill the remainder of Rachel Freedman's request; there's currently still a $21k shortfall from her total ask.

  • Rachel W originally found this opportunity through the Nonlinear Network; kudos to the Nonlinear folks!

donated $13,000

Rachel Weinberg

over 1 year ago

Main points in favor of this grant

Rachel’s work looks really valuable to me. I don’t have strong inside views on technical AI safety research paths, but there are a bunch of positive signals: she’s at a top PhD program, so unlike many other TAIS researchers, she has received high-quality mentorship and works with strong teams. She’s published papers with other well-regarded researchers, some of which have gotten into top journals, and others of which have won safety-specific awards. She also works with interns at CHAI and holds office hours for aspiring AI safety researchers, which were so successful that 80k began to fund and recommend them. The world is really short on good AI safety researchers, and possibly even more short on good AI safety researchers who can offer mentorship to produce more good AI safety researchers. Rachel is both.

She’s also extremely undercompensated for this work at the moment. Her monthly term-time take-home income in the past year was about $2,800, and she lives in a high-cost area. Her lab cannot offer her higher compensation or better insurance. This type of work is easily worth a 6-figure salary, and probably worth a 7-figure salary, so paying $13,000 for a ~5% increase in productivity over a year seems like a good price.

Donor's main reservations

None really. In the worst case, it makes no impact on her work but helps her personally, which seems both unlikely and not that bad relative to the potential downsides of other longtermist grants. In the best case, this makes the difference between a good researcher staying in what I think is the most important field and her leaving it.

Process for deciding amount

I initially offered $10k to cover her baseline medical expenses, in hopes that other donors would chip in a bit more as well. Then, after some discussion with Rachel about the costs that transparency imposed on her and the positive externalities I think it produces, we settled on $13k because this grant and her identity are public.

Mostly I didn’t offer $15-30k initially just because I only have $50k total to give out over 6 months, and it’s only been a week.

Conflicts of interest

None.

donated $13,000

Rachel Weinberg

over 1 year ago

Noting that I raised the funding goal from $34k to $43.6k to account for the ~22% tax that Rachel will have to pay on this grant.

donated $13,000

Rachel Weinberg

over 1 year ago

Raised the funding goal again to $58k, since that's the maximum amount that Rachel thinks she could use productively.