Dan Hendrycks

@hendrycks

regrantor

Executive Director of the Center for AI Safety

danhendrycks.com/

$276,760 total balance
$250,110 charity balance
$26,650 cash balance

$0 in pending offers


Comments


Dan Hendrycks

11 months ago

Main points in favor of this grant

Interest in AI and AI safety is quite high within legal academia. A symposium focused on AI safety at a top law review would greatly increase AI safety’s prestige in the field. The AI safety consortium includes many academics who understand the norms of academia and would be good at field-building.

Donor's main reservations

As mentioned above, it’s unclear how receptive the law reviews will be. Given that interest in AI is surging, now is the best time to pitch such a symposium. However, acceptance is not guaranteed.

Process for deciding amount

The consortium provided an estimate of the symposium costs, which I reviewed and approved; it was broadly in line with workshop costs in other fields.

Conflicts of interest

None.



Dan Hendrycks

11 months ago

Main points in favor of this grant

Despite AI capabilities quickly progressing toward human or superhuman level, the dynamics of an intelligence explosion or automated AI R&D have not been thoroughly explored. If an intelligence explosion were to happen, humans would likely lose control of the process quickly by default, unless precautions had been set up beforehand.

Paul Salmon has previously published highly impactful work in safety engineering and is familiar with the type of systems analysis needed for this research. Paul is also interested in AI safety, having published on the topic of AGI risks.

Donor's main reservations

Whether agent-based models are the right approach remains to be seen.

Process for deciding amount

The amount regranted was comparable to other grants in the field.

Conflicts of interest

I will be helping with this project as well.


Dan Hendrycks

11 months ago

Main points in favor of this grant

Removing hazardous capabilities from models would greatly help reduce AI x-risk from malicious use and unilateral actors. Alex is a researcher with a strong track record who is interested in AI safety and has done prior AI safety research. The timing is right; NIST has been tasked by the recent EO with helping to develop standards and regulations on “dual-use foundation models.” Research now has a much higher likelihood of helping shape regulation.

Donor's main reservations

This is a relatively complex project with many moving parts. It’s crucial that the project be executed well on a short timeline.

Process for deciding amount

The researchers estimated that this was the total amount needed for the dataset. I reviewed and approved the budget.

Conflicts of interest

I will be helping advise this project.

Transactions

For | Date | Type | Amount
Research Staff for AI Safety Research Projects | 3 months ago | project donation | +$150
<70d11eb1-2a41-4bd2-9376-b5a7c969e541> | 5 months ago | profile donation | +$100
Research Staff for AI Safety Research Projects | 5 months ago | project donation | +$500
Research Staff for AI Safety Research Projects | 5 months ago | project donation | +$500
Research Staff for AI Safety Research Projects | 5 months ago | project donation | +$500
Research Staff for AI Safety Research Projects | 5 months ago | project donation | +$25,000
Manifund Bank | 7 months ago | deposit | +$250,000
Manifund Bank | 7 months ago | return bank funds | $210,000
<d9f98ed4-417f-431f-ae33-81b538b1c3dd> | 8 months ago | profile donation | +$10
Removing Hazardous Knowledge from AIs | 10 months ago | project donation | $190,000
Manifund Bank | over 1 year ago | deposit | +$400,000