The Why Race Project is building a free, publicly accessible interactive decision tree that makes the case against racing to develop superintelligent AI to researchers, policymakers, executives, and the general public. The tree maps major pro-racing arguments alongside rigorous counter-arguments, evolves through community feedback, and serves as a tool AI safety advocates can use directly, share in outreach, or draw arguments from for their own communication work.
Racing to develop superintelligent AI poses an existential risk to humanity, yet many people with direct influence over AI development (researchers, policymakers, executives) promote racing because they haven't encountered the full case against it, or lack a concrete response to a pro-racing argument they find intuitive. The core thesis of this tree is that racing is in basically no one's best interest; if more people understood and internalized that, far fewer would support it.
The project addresses this through an interactive decision tree. Users engage with the core case in minutes, navigate directly to whichever pro-racing argument they personally find compelling, and encounter a targeted counter-argument immediately. The tree is built in two parts: a clear core argument (Section I, currently drafted), and an objection taxonomy covering the full range of pro-racing positions (Section II, currently scaffolded, with counter-arguments to be written during the funded period). A community feedback system allows readers to submit new arguments or improvements, letting the tree converge on the strongest version of each side's case over time. The tree also serves as open argument infrastructure other AI safety communicators can draw from for their own work.
Primarily developer compensation, accelerating dedicated work on the project. Remaining funds cover AI tool subscriptions, hosting, domain, and Manifund's fiscal sponsorship fee. Work continues regardless of funding level; funding accelerates rather than enables.
Solo developer with an electrical engineering background. A prior interactive decision tree by the same developer, the Islamic Case for Veganism (over 16,000 nodes and over 4,000 citations, drawing on the Quran, authentic hadith, classical fiqh, and peer-reviewed animal welfare science), demonstrates that the methodology scales.
The most likely cause of failure is insufficient distribution. The tree gets built but not enough people with influence over AI development see it. A secondary risk is that the arguments don't persuade users to change their position. In either case, the tree would still exist as a free public resource and could gain traction over time. There are no financial obligations, contracts, or liabilities that would create risk if the project underperforms.
None. The project has been self-funded to date.