
Forming a 501(c)(4) organization for AI safety policy advocacy/lobbying

Not funded · Grant

$0 raised

Project summary

The overarching goal is to create a highly coordinated 501(c)(4) AI Safety policy organization to lobby and advocate for policies that will:

  • Buy AI alignment researchers more time to solve the alignment problem

  • Put safeguards in place to prevent the development and deployment of harmful AI

The proposed organization will ideally be structured with teams in Sacramento and Washington, D.C. that coordinate on a multi-pronged lobbying and advocacy strategy.

Primary Objectives: 

  • Policy research synthesis and formulation

  • Partnership development

  • Policy advocacy/lobbying

  • Organizational effectiveness analysis

Policies to Investigate/Propose: 

  • Implementing software export controls

  • Regulatory institution for licensing/watchdog purposes

  • Requiring hardware security features on cutting-edge chips

  • Regulation/accountability deals for use and export of EUV chips/machines

  • Requiring a license to develop frontier AI models

  • Requirements for testing and evaluation and certification

  • Standards for identifying AI-generated content 

  • Funding AI safety/alignment, interpretability, and model evaluation research/development

  • Requiring certain kinds of AI incident reporting

  • Clarifying the liability of AI developers for concrete AI harms

  • External red teaming

  • Cyber security auditing requirements

  • International collaboration frameworks

This grant would cover my personal salary while working to establish the organization and the basic starting costs to form the organization. 

Project goals

-Form a strong board of directors

-Find a partner to lead the D.C. team

-Select an office space in Sacramento 

-Incorporate as a 501(c)(4) organization

-Establish an engaging web presence

-Recruit staff

-Build strong relationships with stakeholder organizations and legislators

-Establish policy priorities and lobbying strategy based on AI x-risk reduction and political feasibility

-Prepare for 2025-2026 California legislative session

How will this funding be used?

-Personal income 40%

-Taxes 15%

-Incorporation and legal/consultant fees 30%

-Office space 15%

What is your (team's) track record on similar projects?

I started in the public health field and, over the last few years, have focused on AI safety. Now I am working to bridge the gap between AI safety organizations, AI policy organizations, and the government to foster coordination and increase our chances of mitigating AI x-risk.

Non-profit 

  • Utah Pride Center: Front desk office management and resource database creator 

  • African Impact: Instructed various community courses (e.g., HIV prevention)

  • Mothers Without Borders: Worked in a 3-person office handling social media, event planning, fundraising, and program evaluation 

  • Best Buddies International: Secretary; event planning, taking minutes, data tracking

  • Non-profit Evaluation Mentee and Teaching Assistant: 2-year mentorship from Dr. Ben Crookston on nonprofit evaluation for impact/effectiveness; assisted in instructing and guiding university students through evaluating nonprofit organizations. 

Advocacy

  • Utah Autism Coalition: Event Chairperson; planned events and outreach initiatives aimed at legislators, advocated for increased resources to improve health outcomes for autistic children in Utah

  • American Cancer Society Cancer Action Network (ACS CAN): advocated for health policy changes regarding cancer research, funding, and health resources in Utah

  • Actively contributed to policy reform discussions with renowned figures such as Laurie Garrett and Jon Huntsman Jr. 

  • Attended legislative committee meetings to gain insight on procedure 

  • Advocated for policy changes at my university, leading to significant reforms 

Policy and Research

  • Global Truth Commission Index: Analyzed policy, coded and weighted variables, applied human-centered design principles to create truth commission protocols, assessed final reports released by truth and reconciliation commissions, and collaborated with colleagues to provide recommendations as Colombia established its national truth commission.

  • Master’s Thesis: Evaluated the framing of HIV news coverage and Department of Health content in the Philippines, as well as barriers to HIV prevention.

  • Took multiple courses related to policy and research methodology at the undergraduate and graduate level, and received research mentorship from professors at my undergraduate and graduate universities. 

How could this project be actively harmful?

  • Lack of coordination with other actors in the AI policy space could lead to unintentionally conflicting moves. I am actively mitigating this through careful coordination and by participating in meetings with stakeholders and individuals with similar goals. 

  • Starting a 501(c)(4) that unintentionally poisons the well in the AI lobbying space (making it more difficult to make progress in the space later on), but I am mitigating this by consulting with AI policy and lobbying specialists, maintaining a careful and coordinated strategy with other people in the lobbying space, and aiming to build and maintain credibility for the organization. 

  • Promoting legislation that appears helpful but ends up more performative than effective at reducing x-risk. I am mitigating this by consulting with AI policy organizations and university AI policy hubs to build in checks that maximize the chance of real impact. 

What other funding is this person or project getting?

No funding has been confirmed yet, but I have applied for a larger grant for the organization as a whole through Lightspeed Grants.


Haven (Hayley) Worsham

over 1 year ago

The grant agreement is more direct about not engaging in lobbying activities/attempting to influence legislation. While I likely wouldn't be doing direct lobbying during the 4-month duration of the grant, that is eventually the direction the org would be going, so I understand if my grant app is disqualified because of that. I'm curious, would a grant for salary be considered more in line with the grant agreement policy, compared to using funds for things like incorporation fees? If that is the case, I can decrease the ask amount on my grant app and not include incorporation under the scope.


Austin Chen

over 1 year ago

Hi Haven, thanks for submitting your application! I like that you have an extensive track record in the advocacy and policy space and am excited about you bringing that towards making AI go well.

I tentatively think that funding your salary to set up this org would be fairly similar to funding attempts to influence legislation (though I would be happy to hear if anyone thinks this isn't the case, based on what the IRS code states about 501c3s). That doesn't make it a non-starter for us to fund, but we would scrutinize this grant a lot more, especially as we'd have a ~$250k cap across all legislative activities given our ~$2m budget (see https://ballotpedia.org/501(c)(3))

Some questions:

  • Where do you see this new org sitting in the space of existing AI Gov orgs? Why do you prefer starting a new org over joining an existing one, or working independently without establishing an org at all?

  • Have you spoken with Holly Elmore? Given the overlap in your proposals, a conversation (or collaboration?) could be quite fruitful.


Haven (Hayley) Worsham

over 1 year ago

@Austin Thank you for your comment!

This org is proposed to be a 501(c)(4), while the stakeholder AI gov orgs are typically 501(c)(3)s, so they lack the capability to lobby, advocate, and comment on legislation in the same way a 501(c)(4) can. Basically, the model is that this new org collaborates with the 501(c)(3) orgs doing AI policy and safety research. We would synthesize the recommendations from these orgs; prioritize the recommendations that meet a weighted criteria based on expected x-risk reduction, level of consensus among the stakeholder AI gov/safety orgs, and political feasibility; create and carry out a lobbying/advocacy strategy for those recommendations with legislators; and run routine process/impact/outcome evaluation for quality improvement and accountability purposes.
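Purely as an illustration, the weighted prioritization described above could be sketched like this. The criteria names, weights, candidate policies, and scores below are all hypothetical assumptions, not the organization's actual rubric:

```python
# Hypothetical sketch of weighted policy prioritization.
# Weights and scores are illustrative only (scores on a 0-10 scale).
POLICY_WEIGHTS = {
    "x_risk_reduction": 0.5,        # expected x-risk reduction
    "stakeholder_consensus": 0.3,   # consensus among AI gov/safety orgs
    "political_feasibility": 0.2,   # likelihood of legislative traction
}

def priority_score(scores):
    """Combine per-criterion scores into one weighted priority value."""
    return sum(POLICY_WEIGHTS[criterion] * value
               for criterion, value in scores.items())

# Hypothetical candidate policies drawn from the proposal's list above.
candidates = {
    "frontier model licensing": {
        "x_risk_reduction": 8, "stakeholder_consensus": 6, "political_feasibility": 4,
    },
    "AI incident reporting": {
        "x_risk_reduction": 5, "stakeholder_consensus": 9, "political_feasibility": 8,
    },
}

# Rank policies from highest to lowest weighted priority.
ranked = sorted(candidates, key=lambda name: priority_score(candidates[name]),
                reverse=True)
```

In this toy example the consensus and feasibility weights let a broadly supported policy outrank a higher-impact but less tractable one; the real rubric would of course be set with the stakeholder orgs.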

As a 501(c)(4), we will also have the flexibility to adjust our strategy if AI becomes more of a voter issue, and to advocate for positions that the 501(c)(3) orgs would technically hold but can't really comment on (e.g., creating voter guides that compare various candidates' stances on AI safety issues, or translating complex ballot measures about AI into easier-to-understand materials), and we can make sure that the AI safety/preventing x-risk position is represented.

Additionally, I'd like the ability to lobby in Sacramento, and I'd prefer to have the org incorporated in California to be able to do that without the risk of being seen as an outside/out-of-state organization trying to influence local politics.

I'm currently coordinating with various individuals/orgs in the AI safety/policy space to make sure that we are not all doing the same thing (or things that accidentally conflict) in setting up a 501(c)(4), and deciding how/if to merge, etc. I have not talked to Holly yet, but I would be interested in doing so. It seems like our strategies may be different, but that doesn't mean they are necessarily incompatible. In potentially merging strategies, I want to be upfront that I am deliberately and cautiously positioning this as a lobbying/advocacy organization rather than an activist group. I can go into more detail on what I think the difference is, and why it matters to my theory of change in setting up an org that acts as somewhat of a liaison/mediator/link between AI safety/policy orgs and legislators, if that would be helpful.


Holly Elmore

over 1 year ago

@havenworsham Hi Haven! I'd be very interested in talking :) https://calendly.com/holly-elmore/30min