@Austin Thank you for your comment!
This org is proposed to be a 501(c)(4), while the stakeholder AI governance orgs are typically 501(c)(3)s, so they lack the ability to lobby, advocate, and comment on legislation in the way a 501(c)(4) can. Basically, the model is that this new org collaborates with the 501(c)(3) orgs doing AI policy and safety research. We would synthesize the recommendations from these orgs; prioritize the recommendations that meet weighted criteria based on expected x-risk reduction, level of consensus among the stakeholder AI gov/safety orgs, and political feasibility; create a lobbying/advocacy strategy; carry out that strategy with legislators; and run routine process/impact/outcome evaluation for quality improvement and accountability purposes.
As a 501(c)(4), we will also have the flexibility to adjust our strategy if AI becomes more of a voter issue, and to advocate for positions that the 501(c)(3) orgs may technically hold but can't really comment on (e.g., creating voter guides that compare candidates' stances on AI safety issues, or translating complex ballot measures about AI into easier-to-understand materials). This way, we can make sure the AI safety/preventing-x-risk position is represented.
Additionally, I'd like the ability to lobby in Sacramento, and I'd prefer to have the org incorporated in California to be able to do that without the risk of being seen as an outside/out-of-state organization trying to influence local politics.
I'm currently coordinating with various individuals/orgs in the AI safety/policy space to make sure we aren't all doing the same thing (or things that accidentally conflict) in setting up a 501(c)(4), and to decide how/if to merge, etc. I have not talked to Holly yet, but I would be interested in doing so. It seems like our strategies may be different, but that doesn't mean they are necessarily incompatible. If we do explore merging strategies, I want to be upfront that I am deliberately/cautiously aiming to be a lobbying/advocacy organization rather than an activist group/org. I can go into more detail on what I think the difference is, and why it matters to my theory of change in setting up an org that acts as a liaison/mediator/link between AI safety/policy orgs and legislators, if that would be helpful.
I started out on the Global Health side of EA and transitioned to AI Governance after spending the last few years studying AI x-risk and gaining an appreciation for the necessity of AI safety.
My background is in nonprofit evaluation, research, advocacy, and community engagement. My current focus is AI safety policy research, advocacy, and global cooperation on AI safety and alignment, to increase the likelihood of a flourishing future where humanity and AI can collaborate safely.
B.S. in Public Health Promotion
Brigham Young University
M.Sc in Global Public Health
Needs assessments, program evaluation, program planning, research, advocacy, social media outreach, grant writing, event planning, fundraising, community education
"What We Owe The Future: A Buried Essay"
EA Satirical Blog Post: “How to Survive EAG: San Francisco FOMO”
@Austin Thank you for your comment!
The grant agreement is quite direct about not engaging in lobbying activities or attempting to influence legislation. While I likely wouldn't be doing direct lobbying during the 4-month duration of the grant, that is eventually the direction the org would go, so I understand if my grant application is disqualified because of that. I'm curious: would a grant for salary be considered more in line with the grant agreement policy than using funds for things like incorporation fees? If so, I can decrease the ask amount on my grant application and leave incorporation out of scope.