
AI Policy work @ IAPS

Active Grant
$10,050 raised
$388,000 funding goal

Project summary

Money from this project will fund IAPS to make two senior research hires who will do important policy work and mentor our existing team and future fellows program, making us stronger.

What are this project's goals and how will you achieve them?

Our goal is to reduce risks related to the development and deployment of frontier AI systems. We do this by producing and sharing research that grounds concrete recommendations in strategic considerations, and by strengthening coordination and talent pipelines across the AI governance field.

We aim to bridge the technical and policy worlds. We have expertise on frontier models and the hardware underpinning them, and staff in San Francisco, DC, London, Oxford, and elsewhere. We do both deep research driven by our research agendas and rapid-turnaround outputs or briefings driven by decision-makers’ immediate needs.

We focus on four different areas:

  • US AI standards, regulations, and legislation. We aim to answer questions such as how an agency to regulate frontier AI should be set up and operate, and how AI regulations could be updated rapidly yet still in well-informed ways. Our methods include drawing lessons from regulation in other sectors.

  • Compute governance. We aim to establish a firmer empirical and theoretical grounding for the fledgling field of compute governance, inform ongoing policy processes and debates, and develop more concrete technical and policy proposals. Currently we are focused on understanding the impact of existing compute-related US export controls, and researching what changes to the controls or their enforcement may be feasible and beneficial.

  • International governance and China. We work to improve decisions at the intersection of AI governance and international governance or China. We are interested in international governance regimes for frontier AI, China-West relations concerning AI, and relevant technical and policy developments within China.

  • Lab governance. We work to identify concrete interventions that could improve the safety, security, and governance of frontier AI labs’ systems and that could be implemented through voluntary lab commitments, standards, or regulation. Currently we are focused on pre-deployment risk assessments, post-deployment monitoring and incident response, and corporate governance structures.

Examples of our public work are available here: https://www.iaps.ai/research-and-blog

How will this funding be used?

Funding would go toward hiring new research staff. Our goal is to make senior-level hires that add mentorship capacity to our organization, as our current team skews junior and we also aim to launch a fellowship that would bring in even more junior staff.

We expect each staff member to cost $114K in salary, $14K in tax, $11K in benefits and fees, $7K in travel and other miscellaneous expenses, and $48K in marginal operations, management, communications, outreach, finance, and HR spending to support them and their work. This works out to ~$194K per staff member we intend to add.

We're targeting two such hires if we get the relevant funding, which would be $388K total. We think adding these staff could help us execute our policy projects with more technical and policy expertise, enabling them to do good work while also sharing their knowledge across our existing team.
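As a quick sanity check, the per-staff and total figures above can be summed directly (a minimal sketch; the variable names are ours, not IAPS's budget categories):

```python
# Per-staff cost components, as stated in the funding breakdown.
salary = 114_000
tax = 14_000
benefits_and_fees = 11_000
travel_misc = 7_000
ops_overhead = 48_000  # marginal ops, management, comms, outreach, finance, HR

per_staff = salary + tax + benefits_and_fees + travel_misc + ops_overhead
total = 2 * per_staff  # two planned hires

print(per_staff)  # 194000  (~$194K per staff member)
print(total)      # 388000  ($388K funding goal)
```

The components do sum to the stated ~$194K per hire, and two hires match the $388K goal.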

Who is on your team and what's your track record on similar projects?

Our current team members are listed on our team page.

Details on our track record are available upon request; please email peter@iaps.ai.

What are the most likely causes and outcomes if this project fails? (premortem)

We could fail to find a hire that we're satisfied with. However, our prior hiring round brought us a talented researcher with significant cybersecurity and compute governance expertise, and we expect to be able to find future talent, especially given the large amount of attention on AI.

The hire(s) could fail to work out for interpersonal reasons, which unfortunately does happen from time to time. However, we have designed our selection process based on experience (having hired 100+ people over 5+ years) to minimize these risks, and we have concrete HR and management steps we can take if this does arise.

We could fail to put these hires toward impactful work. However, we vet project ideas through a significant multi-month project selection phase, including extensive engagement with our network, to ensure that the work we do will be timely, relevant, and useful to policymakers. Details on this process are available upon request; please email peter@iaps.ai.

What other funding are you or your project getting?

The money requested in this Manifund app is not currently covered by alternative sources. The remainder of our organizational budget is funded by our existing reserves, the Survival and Flourishing Fund, and other foundations.


Austin Chen

11 months ago

Approving this project as appropriate under our charitable mission in the cause area of AI Governance. It's good to see the endorsements from @ZachSteinPerlman and tentatively @MarcusAbramovitch, as two people who I think are very clued-in to this space!


Zach Stein-Perlman

11 months ago

I've really liked IAPS's past research. I'm very excited about their work. If I were a grantmaker/regrantor, I'd investigate a bit more, but I expect I'd be very excited to fund this.


Marcus Abramovitch

11 months ago

I intend to look into this over the weekend. I will comment/post in Discord what I come away with.