
Create ‘Responsible AI Investing’ recommendations for institutional investors

Not funded · Grant · $0 raised

Project summary

I want to draft and promote a report that encourages investors to advocate for the specific best practices and safety measures from this GovAI expert survey (“Towards Best Practices in AGI Safety and Governance”) when engaging with chipmakers and tech firms with AI labs. Provisionally, the report will:

  1. Encourage investors to see AI as an area in which responsible practices are essential;

  2. Recommend that investors in relevant chipmakers and tech firms with AI labs advocate for the adoption of the GovAI practices, and corresponding disclosures/monitoring;

  3. Recommend that public equity investors consider voting (and publicly announcing commitments to vote) against the management of companies which do not meet minimum safety and governance standards, or which substantially lag their peers in adoption;

  4. Argue that best practices in AI do not always trade off against profits.

My focus may shift in response to developments in the corporate policy landscape (e.g. the recent voluntary commitments from AI labs facilitated by the Biden administration), new governance proposals in the AI governance/safety communities, and feedback from relevant actors in the investment industry. For example, the report may focus more on disclosures, or take a narrower or broader approach to safety measures. I plan to test different approaches iteratively and to invest substantial time in seeking feedback.


In addition to drafting the report, I plan to promote it. I’m not sure exactly which routes will be most important, but I expect to seek meetings with asset managers and institutional investors, AI investors, and ESG professionals. I’ll also consider producing spin-off pieces, writing op-eds or articles, attending or speaking at conferences, running webinars, and running paid LinkedIn ads aimed at relevant people. It may also make sense to promote the report under the brand of an existing or new nonprofit organization (e.g. a “Group for Responsible AI Investment”). I plan to spend one month initially solely on promoting the report.


This is neglected. AI investment and ESG investment have both grown tremendously and there are some early signs that investors are beginning to take AI risk seriously as an ESG issue. I expect these trends to lead to demand for advice on responsible AI investing. But I am not aware of any existing work on responsible AI investing that proposes specific best practices as objectives for engagement, and by default investors are unlikely to be aware of or insist on the best safety and governance practices. Shaping the conversation early may be a way of directing investor focus towards the most important safety and governance practices.

What are this project's goals and how will you achieve them?

The proximate goals of the project are (i) to produce a report on responsible investing in tech firms with AI labs and in chipmakers and (ii) to promote it. The downstream goal is that investors use this report as guidance when deciding how they will engage and what policies they advocate for. Investors can then have impact through the following channels:

  1. Investors can directly institute changes in corporate policy and governance with respect to AI risks. In public equities, this can happen through proxy contests, via either shareholder proposals or nomination of a dissident slate of directors. These might be viable strategies for shareholder activists at some firms if public concern about AI grows dramatically. In private equity, investors can institute changes through their board representation.

  2. Investors’ pressure or cooperative engagement can lead corporations to voluntarily adopt best practices with respect to AI risks. Even firms with dual-class shares (e.g. Meta and Alphabet) are often responsive to informal shareholder pressure.

  3. Investors publicly advocating for best practices with respect to AI risks at companies they invest in might make it easier for regulators to impose those practices externally. It can help defray regulators’ concern that particular regulations would be an undue burden on productive activity, since investors themselves are advocating for voluntary adoption. Or it can encourage regulators to go beyond minimal measures.

  4. Investors have a powerful platform for discourse, and investor behaviour is newsworthy. Shaping investor norms can in turn shape reporting and public understanding.

I’ll be confident in the project’s impact if I observe that the report is being widely read, that investors are acting through the channels described in 1–4 above, and that investors are engaging around the specific proposals in the report.

If the report lands well, it might also yield sufficient credibility to enable me (or a new org) to take on more ambitious projects to shape AI investment. This could look like establishing a draft disclosure framework for AI along the lines of the Task Force on Climate-Related Financial Disclosures, advising existing investor initiatives, or launching a proxy advisory service.

How will this funding be used?

Here's a draft budget:

Four months of my current gross salary: 15,500 GBP

Graphic design: 500 GBP

Second monitor: 200 GBP

New laptop to replace my very slow old one: 1,000 GBP

Promotion of the report and other expenses (including targeted LinkedIn ads): 2,000 GBP

~10% buffer: 1,920 GBP

Total: 21,120 GBP

Who is on your team and what's your track record on similar projects?

This project will be run by me, Zachary Brown.


I’ve written long reports and pieces of research on a deadline and without supervision before. My current role involves independently authoring cost-effectiveness analyses of social programs, and my undergraduate thesis at Oxford was written with minimal supervision (and received high marks). I expect this report to be considerably easier than my undergraduate thesis, since it leans heavily on existing research. I’m familiar with the responsible investment landscape after a year informally researching shareholder activism and proxy advisors, and I am familiar with recent AI governance work.

I don’t have experience promoting a report like this, but I have developed a useful network of EAs in finance and impact-finance professionals, and I think my skills are well suited to the task.

What are the most likely causes and outcomes if this project fails? (premortem)

The most likely cause of failure is that I don’t succeed in getting the report in front of enough (or sufficiently important) investors and asset managers, or that I do succeed but audiences are not persuaded enough by the report’s content for it to inform concrete actions (engagements with companies). In these cases there’s no impact, but also low risk of harm.

I think risks of harm are smaller and more speculative. One route to harm is that the proposed safety practices are poorly selected, and turn out to be bad in hindsight. Another is that investors use these recommendations not as targets for engagement, but as exclusionary screens – in which case AI labs/chipmakers may end up with a pool of investors who are selected for not caring about safety practices.

In general, this project is also only as good as the marginal safety practice I encourage investors to advocate for: all else equal, if you are more (less) optimistic about current safety and governance proposals, you should be more (less) optimistic about the value of this project.

What other funding are you or your project getting?

I have also submitted an application to Lightspeed Grants, but have not yet heard back.


Rocket Drew

about 1 year ago

Re promoting the report, it would be so excellent if you could get Matt Levine to report on it. His newsletter has a wide readership in finance, he writes regularly about ESG (including ESG and AI), and from what I can tell he's good on AI risk.


Holly Elmore

about 1 year ago

I am excited to read about this project! I am frequently asked for my recommendations on responsible investing and shareholder actions for AI Safety, and I don't have the expertise to give an answer. I would likely use the recommendations in the report and recommend they be used in corporate campaigns or as protest demands. There is demand for this sort of social impact for AI Safety, and I believe it could serve as a sort of low-risk "gateway" involvement with AI Safety advocacy for many people and orgs.