
Funding a Consortium of Legal Scholars and Academics doing AI Safety


Peter Salib

Not funded · Grant

$0 raised

Project summary

Given the recent momentum around AI legislation and regulation, a number of legal academics have become interested in AI safety. After a previous workshop on AI safety x legal scholarship, they formed a consortium and are interested in helping build the field further.

One of their goals is to initiate a symposium on AI safety at a top law review. Such a symposium would include an RFP for legal scholarship on AI safety as well as an in-person event for the papers that are ultimately selected. If this were accomplished at a top law review, it would lend considerable credibility and prestige to AI safety in legal academia and pave the way for future field-building efforts.

This regrant would cover the costs of such a symposium, which also increases the likelihood that a law review accepts the proposal.

What are this project's goals and how will they be achieved?

The project’s goal is to initiate a symposium on AI safety at a top law review. The consortium will leverage its connections, reach out to the editorial boards of several law reviews, and pitch the opportunity.

How will this funding be used?

Funding will cover the costs of the symposium, specifically the flights and hotel stays of the academics attending.

Who is on the team and what's their track record on similar projects?

Yonathan Arbel is an Associate Professor of Law at the University of Alabama. 

Peter Salib is an Assistant Professor of Law at the University of Houston Law Center.

Kevin Frazier is an Assistant Professor of Law at St. Thomas University.

All three have published papers in prestigious legal journals and understand the norms and customs of legal academia.

What are the most likely causes and outcomes if this project fails? (premortem)

The main failure mode is that no top-tier law review is interested in hosting a symposium on AI safety. This could happen if interest in AI safety isn’t high enough. In that case, the consortium has discretion to use the funds for a separate project, so long as that project is focused on reducing AI x-risk by increasing the prominence of AI safety in law. For example, the funds could be used to run a prize competition.

What other funding is this person or project getting?

None that I'm aware of for the legal symposium.


Rachel Weinberg

10 months ago

Hey Peter, unfortunately Manifund won’t be able to fulfill this grant at this time. An unexpected influx of year-end regrants spent down the total pot of regrantor funding, meaning that we don't have enough left to fund a few of the last projects (like this one).

We’re so sorry if this created false expectations. Best of luck applying for funding elsewhere—hopefully Dan’s enthusiasm and support for your project will be of help, even if he couldn’t give you a grant directly.


Dan Hendrycks

11 months ago

Main points in favor of this grant

Interest in AI and AI safety is quite high within legal academia. Getting a symposium focused on AI safety at a top law review would greatly increase AI safety’s prestige within legal academia. The AI safety consortium includes many academics who understand the norms of legal academia and would be effective field-builders.

Donor's main reservations

As mentioned above, it’s unclear how receptive law reviews will be. Given that interest in AI is surging, now is the best time to pitch such a symposium. However, acceptance is not guaranteed.

Process for deciding amount

The consortium provided an estimate of the symposium costs, which was reviewed, approved, and found to be broadly in line with workshop costs in other fields.

Conflicts of interest

None.