The mission of the AI Policy Institute (AIPI) is to channel public concern into effective governance. We engage with policymakers, media, and the public to shape a future where AI is developed responsibly and transparently. Much of our focus is on reducing the risk of humanity losing control of these systems. We pursue our goals by (1) educating federal lawmakers on issues such as frontier AI development, AGI, and the extreme risks posed by these systems; (2) engaging with the media to educate the public; and (3) conducting public opinion polls to gauge public awareness and appetite for different policies. Success looks like a large increase in AGI/ASI awareness among members of the US Congress and the public.
We believe two factors set AIPI apart from most other US organizations working on AI policy:
We focus on AGI and loss-of-control risks (as opposed to issues such as "current harms"), and our research, polling, and educational work address these issues directly.
We've built unusual access to policymakers across both parties. We use this access to educate members of Congress and the Executive Branch on the risks from advanced AI.
Examples of our recent impact:
Expanding the conversation in Congress on AGI risk. At the start of 2025, very few people in or around Congress were discussing AGI, superintelligence, or loss of control. As of today, over 30 members of Congress across parties have publicly engaged with these topics in a non-disparaging, non-dismissive way. We claim substantial credit for a sizable portion of this shift. One inflection point: our President Mark Beall testified to Congress with remarks focused on AGI/ASI and loss of control. Transformer described the hearing as one where "lawmakers demonstrated a level of situational awareness that would have been unthinkable just months ago."
Public opinion polling that shaped the conversation. Our polling has shown that majorities of Americans across party lines support AI safety regulations and believe AI could cause catastrophic harm. We've delivered this data directly to senior policymakers in both parties, where it has been well received.
Media engagement that broadens the audience for AGI risk. Our team has discussed AI risk in mainstream and broadcast media, helping to bring the conversation to audiences that aren't already engaged with AI safety. Mark Beall co-authored a New York Post op-ed titled "We need guardrails for artificial superintelligence — before it's too late," and our team's work has been cited or quoted in outlets including NBC News, Politico, the Associated Press, and others.
We're a 501(c)(3) seeking funding to expand our research, polling, and educational work. Our 2026 C3 base budget is $3.87M. Roughly 55% covers core staff (senior leadership, policy and research, communications, operations); ~25% sustains polling, research, events, and convenings; and ~20% covers legal & compliance, office space, travel, and operational infrastructure.
Daniel Colson (Executive Director)
Daniel founded and leads AIPI. AIPI's polling of American attitudes toward AI has been instrumental in getting policymakers to take AI risk more seriously. Previously, Daniel co-founded Reserve, a fintech company backed by Sam Altman and Peter Thiel.
Mark Beall, Jr. (President)
Mark leads strategy and external affairs. He previously served as Director for AI Policy at the Pentagon and advised seven U.S. Secretaries of Defense on national security policy, co-founded and served as CEO of an AI startup, directed AWS's strategic cloud initiatives for the defense sector, and co-authored a landmark State Department report on AI-related risks.
Peter Wildeford (Head of Policy)
Peter oversees our policy research agenda. He previously co-founded and served as Chief Strategy Officer at the Institute for AI Policy and Strategy (IAPS). Peter is a top forecaster, with top-1% performance across Metaculus forecasting competitions.
Daniel Eth (Director of Strategy)
Daniel oversees production of our educational materials. He researches the possibility of automated AI R&D leading to an intelligence explosion, and he recently co-authored a report on the subject. He previously conducted AI governance research at Oxford. He holds a PhD from UCLA and BS and MS degrees from Stanford, all in Materials Science & Engineering.
Min Goodman-Cheng (Director of Communications)
Min leads our media and communications strategy, managing AIPI's press relations, social media, and other external communications. She comes to AIPI after five years on Capitol Hill, most recently as Communications Director for the Senate Committee on Banking, Housing, and Urban Affairs under Chairman Sherrod Brown. She also worked on higher education policy at the think tank Third Way.
Jeff Starr (Chief Operating Officer & Director of Development)
Jeff leads operations and fundraising. Previously, he co-founded Growth Accelerators, a tech go-to-market consultancy, and he founded, led, and sold a software company. He has over 20 years of experience across 01Click, SAP, i2, and McKinsey.
Philip Wieczorek (Associate)
Philip serves as an Associate at AIPI, supporting senior staff with policy research, legislative tracking, and day-to-day operations. He holds an MSc in International Social and Public Policy (Research) from LSE, where his dissertation analyzed AI automation exposure across U.S. labor markets, and an MA (Hons) in Economics and Politics from the University of Edinburgh.
AIPI focuses on informing policymakers and helping align political incentives: media engagement increases voter awareness, and polling demonstrates voter preferences to elected officials.
Causes: Failure to close our funding gap, or a shift in the philanthropic landscape that reduces support for AI safety research and education.
Outcomes: If we fail to raise the necessary funds, we will need to curtail our research, polling, and educational work. AIPI is one of the few organizations producing rigorous survey research on AGI risk and delivering it directly to policymakers across the political spectrum. Curtailing our work would mean fewer data points in the policy conversation, less educational material reaching members of Congress and their staff, and a reduced public evidence base for understanding how Americans actually view AI risk. Industry-funded narratives would face less counterweight from independent, evidence-based work focused on catastrophic risks from advanced AI.
Over the past 12 months, AIPI has received approximately $2.3M in C3 funding, including grants from the Survival and Flourishing Fund (a $200K Speculation Grant and a $1.435M S-Process Grant) and additional C3 contributions from individual donors. We spent $1.12M in C3 funds in 2025 and currently hold a C3 balance of approximately $1.8M. Our 2026 C3 base budget is $3.87M, so we are actively raising to close the gap.