Forethought is a non-profit that researches how to navigate the transition to a world with superintelligent AI systems.
Our research to date has highlighted the risk of AI-enabled coups, analyzed the likelihood of an intelligence explosion, and made the case that AI means we need to urgently address many challenges besides alignment.
This research has already fed into AI companies’ plans, shaped 80,000 Hours’ top problem profile list, helped to catalyze a safety-focused VC fund, and has been viewed more than 100,000 times.
Additional funds will allow us to expand our research team and translate our ideas into policy change.
➡️ You can see more detail about the case for funding Forethought on our Fundraiser page
Forethought’s mission is to work towards humanity being able to successfully navigate the transition to a world with superintelligent AI.
We do this by:
Researching neglected topics in AI strategy, such as how to avoid AI-enabled coups and how to reach great futures (not just avoid catastrophe).
Working to ensure that action-relevant insights from our research are implemented, by finding and empowering specific people to take them forward.
Our research topics in 2026 are likely to include:
How to improve AI character, including what AI model specs should look like.
Follow-up work on AI-enabled coups, including further research on threat modelling and concrete recommendations for AI companies and governments.
Building on our work on Better Futures to get a clearer sense of next steps.
Exploring in what circumstances deals with AIs can be used to reduce the risk of misaligned AI takeover.
Scoping out the potential for strategy and policy work in space governance.
Our impact function’s plans for 2026 include:
Continuing follow-up work on AI-enabled coups, for example by developing policy proposals and technical agendas for auditing for secret loyalties, and by ensuring that infosecurity prioritises systems integrity.
Exploring other areas, such as space governance.
Working to build the field of people thinking about how to wisely navigate rapid AI progress.
Marginal funding allows us to hire additional researchers, including senior researchers, which will significantly accelerate our research. It will also allow us to increase the budget available for our impact function, to conduct activities such as running fellowships and events to build up the research and policy fields around our ideas, helping to ensure they lead to real changes in AI development.
Forethought is currently evaluating applicants for (Senior) Research Fellow positions. We have been impressed by the quality of applications. We expect to hire at least 3 researchers, though it may be more, depending in part on funding. Marginal funding also determines the extent to which we're able to support our broader intellectual field, for example with a visiting fellows program, grants or other support.
I'm a fan of Forethought. They've got great people and are asking the important questions. Basically a spiritual successor to the FHI and the OpenPhil worldview investigations team. If I wasn't at AI Futures Project I'd probably want to work at Forethought.
— Daniel Kokotajlo, author of AI 2027
I'm excited to recommend Forethought. They're tackling the crucial but neglected question of how to ensure transformative AI leads to genuinely good outcomes—not just human survival. Will, Tom, and Max have assembled an exceptional team that's already producing high-impact research and influencing how AI labs think about risks like AI-enabled coups. They represent exactly the kind of foundational thinking we need as we navigate the intelligence explosion.
— Zach Freitas-Groff, AI Programme Officer at Longview Philanthropy
I think Forethought is currently the best institution in the world doing full-throated, scope-sensitive macrostrategy. They've assembled a great team, and they've already made substantial contributions to the discourse (I'm especially excited about the AI-assisted coups report and the Better Futures series), and I think that the overall research agenda (e.g., work on the various grand challenges that the intelligence explosion could create) is quite promising. What's more, I think the general niche Forethought is filling is both highly neglected and crucially important – and in particular, important to our ability to notice ways in which the overall strategic and prioritization landscape may be different from what we thought, and to adjust accordingly. I think they're well worth funding, and I'm excited to see where their work goes from here.
— Joe Carlsmith, formerly Senior Advisor at Open Philanthropy, now Anthropic
Our leadership team includes Will MacAskill, Tom Davidson and Max Dalton, and you can see full details on our website.
In our first year, we've published over 20 research papers, including:
Preparing for the Intelligence Explosion, which makes the case that we might see a century of progress in a decade, sparking many challenges besides misalignment risk.
AI-enabled coups: How a small group could use AI to seize power, which makes the case for this new threat model, and sets out potential mitigations.
Will AI R&D Automation Cause a Software Intelligence Explosion?, which analyses the extent to which AI progress would accelerate once AI can automate AI research.
AI Tools for Existential Security, which argues that we should invest much more in accelerating AI that will improve epistemics and coordination, and reduce risks.
The “Better Futures” series, which argues that we should focus more on reaching a near-best future (rather than merely surviving), and sets out what that might imply.
So far we have mostly been focused on research, but we have been pleased by the impact our work has already had.
80,000 Hours told us that our research was one of the main influences on their new problem ranking, and they link extensively to our research in their new AI-focused pages. In particular, Extreme power concentration is their #2 problem, and many of the “emerging challenges” they include were first highlighted in our research.
Will MacAskill’s interview on 80,000 Hours was their most popular episode ever at the time (it’s currently been viewed 140,000 times on YouTube). Tom Davidson also went on the show to discuss AI-enabled coups.
We are advising staff at frontier AI companies on mitigating some of the risks we’ve highlighted.
Forethought’s research led to the creation of the Safe Artificial Intelligence Fund, a VC firm founded by Geoff Ralston (former President of Y Combinator).
Our researchers have contributed to the International AI Safety Report, a global assessment of advanced AI capabilities authored by 100+ representatives from 33 countries and intergovernmental bodies.
Our research on AI tools for existential security informed the Future of Life Foundation’s Fellowship call.
The most likely causes of failure include:
Mistakes in research and research prioritisation, such as missing crucial considerations, choosing the wrong frames for research, or prioritising wrongly. We work to mitigate this through external feedback, a strong internal feedback culture, and a strong culture of prioritisation and focus on what’s important, but it’s a hard problem.
Coming up with good ideas but failing to get them acted upon. Our impact function is aimed at avoiding this.
Failing to attract and retain the strongest talent for our work.
Being too slow to affect AGI development if it happens extremely soon.
We’ve raised $5.2m in the past 12 months from a combination of private donors and institutions. Forethought has been evaluated and endorsed by both Longview and Coefficient Giving (formerly Open Philanthropy), who have recommended us to donors in their networks.
Ethan Josean Perez
2 days ago
Forethought’s work has been one of the biggest influences on my safety research, surfacing several new areas of research that I and some of my collaborators hadn’t previously considered or prioritized. Keep up the amazing work!