
General Support for the Center for AI Policy

Grant · Not funded · $0 raised

Project summary

The Center for AI Policy (CAIP) is a new lobbying & advocacy organization that aims to develop and promote policies that reduce catastrophic risks from advanced AI. 

We are currently drafting and advocating for legislation to mitigate AI risk. The legislation would establish a federal authority that implements hardware monitoring, licensing for advanced AI systems, and strict liability for extreme harms from AI systems. We also aim to raise awareness of extreme risks from AI by engaging with policymakers who are becoming interested in AI and AI safety.

What are this project's goals and how will you achieve them?

Broadly, the aim of this project is to pass legislation that mitigates AI risk, and we are attempting to do so during the current congressional window. We are spending our time a) researching, drafting, and refining a bill and b) working to get the legislation passed by talking with relevant congressional offices.

The bill that we wrote establishes a federal authority that implements hardware monitoring, licensing for advanced AI systems, and strict liability for extreme harms from AI systems. The authority would also monitor and estimate catastrophic risks from advanced AI development, inform Congress and the Executive Branch about frontier AI progress, and respond to AI-related emergencies. We intend to update the bill based on feedback from policymakers and drafters, as well as further AI governance research. For more detailed information about our policy ideas, you can request access to this three-pager and email info@aipolicy.us with any questions.

We think this legislation is good because 1) it would somewhat reduce the race to the bottom by requiring AGI corporations to have sufficient safety practices to receive development and deployment licenses, and 2) the agency would be able to respond swiftly to AI-related emergencies. The lobbying organization would a) push to get this policy implemented and b) shift the policy Overton window more generally, such that policymakers think more clearly about AI risks.

To get the legislation passed, we will continue to work with lobbyists to connect with relevant congressional offices and find sponsors and co-sponsors for the bill, and meet with AI policy leaders to discuss AI safety and advocate for effective regulation.

How will this funding be used?

We’d like funding to pay the salaries of 2-3 policy analysts and/or communications specialists for one year, though we might hire for 6-month contracts if we receive less funding. Here are the job descriptions:

CAIP Job Description: Policy Analyst

CAIP Job Description: Communications Specialist

Ideally, the yearly salary for each of these positions would be ~$120,000, to help hire top-tier talent.

Who is on your team and what's your track record on similar projects?

The initial team includes Thomas Larsen (CEO), Jason Green-Lowe (Legislative Director), and Jakub Kraus (COO). Several additional people have volunteered significant time, including Akash Wasil, Olivia Jimenez, and a few others.

Members of our team have track records in technical AI safety (Thomas), AI safety field-building (Jakub), and non-AI law and policy (Jason). However, we don't have any experience in lobbying. To address this, we have hired a team of external lobbyists to assist us for at least the next 6 months, and we plan to hire people with more policy and/or lobbying experience. We are also receiving media training from an external group.

What are the most likely causes and outcomes if this project fails? (premortem)

The most likely cause of failure is that influencing the federal legislative process is extremely difficult, especially given our relative lack of connections/experience, so we may fail to pass our legislation.

There are also risks of negative externalities from the project. We think these are important (and we are trying to track them), but we still expect the project to be quite beneficial. Here’s a non-exhaustive list:

  • We receive insufficient bipartisan support for our bill, and we increase the likelihood that our efforts lead to the partisan polarization of AI safety. (To mitigate this, we are working with a professional lobbying group. We will also stop promoting our bill if we realize that it would create significant partisan divides on AI regulation.)

  • We make difficult-to-predict political blunders due to our lack of experience with lobbying. This not only reduces the effectiveness of our effort but also “poisons the well” for other groups working on AI policy efforts. (To mitigate this, we are working with a professional lobbying group and communicating regularly with other AI policy groups.)

  • The legislation we push for ends up getting implemented, but it turns out to be a bad idea, and the default legislative process would have resulted in better policies.

  • We “crowd out” another effort that would have been similar to ours but executed more effectively. (That said, we’re not aware of another effort that plans to explicitly lobby for interventions to reduce x-risk.)

This list stays at a high level, is non-quantitative, and is short. For more extensive thoughts about potential failure modes, you can email us at info@aipolicy.us.

What other funding are you or your project getting?

We received initial funding from Lightspeed Grants (as part of the venture grants program) and from some independent donors. This funding has supported our initial hires, lobbyists and lawyers, and other project-related expenses. We’ve also applied to the Survival and Flourishing Fund.

Nathan Young

about 1 year ago

So is the key goal here to pass federal legislation?

Thomas Larsen

about 1 year ago

The primary goal is to get US federal legislation passed, but we also hope to shift the political Overton window around AI and the prevention of catastrophic AI risk.

Holly Elmore

about 1 year ago

I think CAIP is doing great work and I encourage individuals to support them beyond what Manifund can give.

(Donations to 501(c)(4) orgs are not tax-deductible, but you have to give a fairly large amount before itemizing charitable deductions beats taking the standard deduction, so that may not make a difference for you. I have given 10% for years and have never exceeded the standard deduction.)

Rachel Weinberg

over 1 year ago

Note on Manifund funding projects trying to influence legislation: based on this from the IRS, Manifund should be able to give about $225,000 to lobbying over the next 4.5 months, or ~$125,000 at our current level of total spending. About $5k has already been offered to Holly for her AI moratorium proposal, which has yet to be approved but probably will be once we figure out an alternative grant agreement for her that doesn't require her to promise not to try to influence legislation.

That's to say, we can't fully fund this unless we raise a lot more money, but we could partially fund it. Also flagging to regrantors that this has to clear a special lobbying bar, because funding it comes out of our lobbying funding pool specifically, which is much smaller than the ~$1.9M we have in total.
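(The comment doesn't show the calculation behind these caps. One plausible basis, offered here as an assumption rather than a statement of Manifund's actual accounting, is the IRS 501(h) expenditure test, which caps an electing public charity's lobbying spending on a sliding scale of its exempt-purpose expenditures: 20% of the first $500k, 15% of the next $500k, 10% of the next $500k, and 5% of the rest, with a $1M overall ceiling. A minimal sketch, with illustrative expenditure totals chosen to reproduce the figures above:)

```python
def lobbying_limit_501h(exempt_purpose_expenditures: float) -> float:
    """Nontaxable lobbying amount under the IRS 501(h) expenditure test."""
    brackets = [(500_000, 0.20), (500_000, 0.15), (500_000, 0.10)]
    limit, remaining = 0.0, exempt_purpose_expenditures
    for width, rate in brackets:
        portion = min(remaining, width)  # spending that falls in this bracket
        limit += portion * rate
        remaining -= portion
    limit += remaining * 0.05            # 5% of everything past $1.5M
    return min(limit, 1_000_000)         # overall $1M ceiling

# Illustrative inputs (assumptions, not figures from the thread):
print(lobbying_limit_501h(1_500_000))  # 225000.0 -- matches the $225k cap
print(lobbying_limit_501h(666_667))    # ~125000  -- matches the ~$125k cap
```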

Thomas Larsen

over 1 year ago

Thank you for this clarification, Rachel. I'm commenting to note that it's fine if this can't be fully funded; partial funding would still be helpful. (With partial funding, we might hire fewer people or offer new hires shorter contracts. We have also applied for funding from other sources, so these sources could cumulatively allow us to reach our funding goal.)