CBPAI Manifund Project Description — REVISED (December 2025)
Project Summary
SHORT ANSWER
The Deal of the Century aims to persuade a critical mass of key influencers of Trump's AI policy (J.D. Vance, Sam Altman, Steve Bannon, Dario Amodei, Tulsi Gabbard, Joe Rogan, Tucker Carlson, Marco Rubio, Demis Hassabis, Pope Leo XIV, and others) to champion a timely US-China-led global AI treaty that prevents catastrophic AI risks while securing American leadership. With a 356-page Strategic Memo v2.6 published December 30, 2025, insights from our completed October 2025 US Persuasion Tour, and an active Q1-Q2 2026 Roadmap, we are working to convert the 77% of US voters who support a strong international AI treaty into decisive political action before the political window closes around the anticipated Trump-Xi summit in April 2026.
DETAILED ANSWER
We're at a fork in history. Leading US AI labs—OpenAI, xAI, Meta, and NVIDIA—openly aim for Artificial Superintelligence (ASI), AI that self-improves beyond human control. Most leading AI lab executives acknowledge the risks privately, assigning a 10-20% probability to human extinction, yet no coordinated response exists. Timelines have collapsed from decades to a few years.
History offers a precedent. On June 14, 1946, just hours after Donald Trump's birth, President Truman's advisors persuaded him to propose the Baruch Plan, history's boldest treaty for global nuclear control. It nearly succeeded. Today we face similar stakes, with a second chance to get it right.
A narrow but real window exists. Trump is expected to visit Xi around April 2026; Secretary Rubio has publicly said there is a "high probability" the visit happens this year. Trump's approval is at second-term lows; he needs a legacy-defining achievement. Meanwhile, three-quarters of Americans support international AI cooperation. A handful of trusted influencers (Vance, Altman, Bannon, Pope Leo XIV, and Gabbard) share overlapping humanist concerns and have the access to shift Trump's stance. If 3-4 unite around this vision, history can change.
Our approach is unique: We've discovered that most key influencers of Trump's AI policy share deeply-held humanist values and visions of "human-first AI." Our 356-page Strategic Memo provides deep psychological profiles of 14 influencers, tailored pitches for their values, and a novel high-bandwidth treaty-making process that avoids past failures. Crucially, our treaty design addresses both extinction risk AND authoritarian capture—the twin concerns that have paralyzed many in the AI safety community. We frame this as "peace through strength"—not globalism, but American-led advantage securing an unmatched presidential legacy while preserving democratic values.
We've already executed. Our team completed the October 2025 US Persuasion Tour across the Bay Area and Washington DC (see Achievements), engaging directly with AI lab headquarters and building strategic networks. Now we're planning our 2nd Persuasion Tour for Q1-Q2 2026, extending to Rome and Mar-a-Lago, timed for maximum impact before the April Trump-Xi window.
The race is between our persuasion timeline and ASI development timelines. Every week of delay compounds irreversible risk.
What Are This Project's Goals? How Will You Achieve Them?
SHORT ANSWER
Our goal is to persuade at least 3-4 key Trump AI influencers to jointly pitch Trump privately for a US-led global AI treaty over the next few months, ahead of his anticipated April 2026 meeting with President Xi. We achieve this through: (1) direct delivery of tailored Strategic Memo materials to influencers via warm introducers during our 2nd Persuasion Tour (Q1-Q2 2026), (2) leveraging insights and relationships built during our 1st Persuasion Tour in October 2025, (3) strategic gatherings in the Bay Area, DC, Rome, and Mar-a-Lago to coordinate introducers, and (4) continuous refinement of our case based on feedback from 40+ multidisciplinary advisors.
DETAILED ANSWER
Primary Goal: Critical Mass of Influencers. Get 3-4 key influencers to jointly pitch Trump on championing a global AI treaty before his China visit. Historical analysis shows Truman shifted on the Baruch Plan when Oppenheimer, Acheson, and a few others presented a united front. Trump responds to consensus among trusted advisors when presented with a pragmatic case for American advantage.
Secondary Goal: Foster a Humanist AI Alliance. Build convergence among influencers who share humanist values—whether rooted in Christian social teaching (Vance, Pope Leo XIV, Bannon) or techno-humanist concerns about AI safety (Altman, Amodei, Hassabis). Our Strategic Memo maps how these camps can unite against trans/post-humanist accelerationists comfortable with the ASI gamble. Notably, Anthropic's Head of Policy Jack Clark explicitly called for a Baruch Plan for AI in The Economist in 2023 and recently stated that "most paths to superintelligence end in a global government or human extinction"—the intellectual foundation for this approach already exists within the AI safety community.
Execution Strategy—Multi-Channel Engagement:
Direct Strategic Outreach: We've created influencer-specific Open Letters and Strategic Memo sections analyzing each target's public positions, anticipating objections, and making cases calibrated to their worldview. For Vance, we emphasize Catholic social teaching on technology and civilization-scale responsibility. For Altman, the technical feasibility of verification mechanisms and alignment with his 2023 Senate testimony supporting international cooperation. For Bannon, framing it as economic warfare prevention and American sovereignty protection.
Field Activation at AI Labs: During our October 2025 tour, we held 26+ meetings with AI lab policy staff in the Bay Area and met with 16+ researchers who have direct access to DC policymakers. We hand-delivered Direct Letters to Altman, Amodei, and Hassabis at their headquarters. This creates internal pressure and identifies champions who can influence leadership.
Strategic Gatherings: Our Q1-Q2 2026 roadmap includes DC, the Bay Area, the Mar-a-Lago area, Vatican/Rome, New Delhi, and Singapore engagements—bringing together introducers, advisors, and supporters for operational coordination timed around the April 2026 political window.
Iterative Refinement: Strategic Memo v2.6 published December 30, 2025 (356 pages, up from 130 pages in v2.0). Version 3.0 with updated Open/Direct Letters ships at the end of January 2026. Each iteration sharpens arguments based on real conversations with introducers and influencers.
The Three-Stage Coalition Strategy:
Stage 1: Unite the Humanist Core — Primary targets: Vance, Pope Leo XIV, Bannon, Rubio, Gabbard. These influencers share philosophical alignment rooted in Christian or traditional humanist values.
Stage 2: Bridge to Techno-Humanists — Primary targets: Altman, Amodei, Hassabis, and Sacks. Bridging requires demonstrating that treaty architecture preserves innovation space while addressing their genuine safety concerns.
Stage 3: Persuade or Outmaneuver Trans/Post-Humanists — Primary targets: Thiel, Musk. If persuasion fails, the humanist alliance must build sufficient critical mass to outweigh their influence with Trump.
How Will This Funding Be Used?
SHORT ANSWER
$150-400k funds intensified operations through critical 2026 political windows: staff to leverage our 356-page Strategic Memo treasure trove (2-3 hires @ ~$120k), follow-up Persuasion Tours (travel to DC, Mar-a-Lago, Rome @ ~$30k), specialized consultants (treaty mechanisms, China policy, communications @ ~$25k), operational infrastructure (secure communications, legal/fiscal sponsor fees @ ~$15k), and strategic reserve for time-sensitive opportunities (~$10k). A bare minimum of $50,000, if contributed within 1-2 weeks, would extend our runway through the critical April 2026 window.
DETAILED ANSWER
Personnel ($120k+): 2-3 staff hires to leverage the treasure trove of analysis in our 356-page Strategic Memo and scale outreach 10x. Currently operating on a ~$7,500/month burn rate with one partially-compensated lead organizer. Additional staff would enable systematic introducer cultivation, rapid response to emerging opportunities, and sustained engagement with the 40+ advisors and partners in our network.
Consultants and Specialized Expertise ($25k):
Treaty Mechanism Designer ($10k): Engage our world-class technical experts more deeply on verification systems and enforcement mechanisms. Critical for credible answers when influencers ask, "How would this actually work?"
China Policy Advisor ($8k): Deep expertise on Xi administration dynamics and US-China negotiation history. Critical for answering "Would Xi actually agree to this?" and preparing for the April 2026 summit window.
Strategic Communications ($7k): Messaging refinement for different influencer worldviews. Translating technical AI safety arguments into language that resonates with Bannon vs. Pope Leo XIV vs. tech executives.
Travel and Events ($30k):
2nd Persuasion Tour (Q1-Q2 2026) — $20k: Flights, lodging, ground transport, and venue rentals for DC, Bay Area, Mar-a-Lago area, and Vatican/Rome engagements. Timed for maximum impact before the April 2026 Trump-Xi window.
Follow-up Travel (Jan-June 2026) — $10k: Quick-response trips for emergent opportunities. When introducers identify narrow windows with influencers, we must move fast.
Operational Infrastructure ($15k):
Secure Communications & Digital Security ($3k): Encrypted channels for sensitive influencer conversations
Fiscal Sponsor/Legal Fees ($5k): Operating through established 501(c)(3) host organization
Website & Digital Presence ($2k): Hosting, security certificates, CRM for introducer tracking
Documentation & Research Tools ($3k): Raindrop bibliography (1,000+ sources), policy databases, transcription for strategy refinement
Office/Logistics ($2k): Temporary workspace during tours, printing materials
Strategic Reserve ($10k): Last-minute opportunities that can't wait. Examples: Introducer identifies a 48-hour window with a Vance staffer requiring immediate travel. Key AI lab announces capability jump requiring rapid Strategic Memo update. Pope Leo XIV schedules an unexpected US visit.
Budget Efficiency Notes: We have already raised ~$75,000 in prior funding (SFF $60k + Ryan Kidd $10k + ~$5k in small donations), spent on initial Memo development and the 1st Persuasion Tour. We're raising $150-400k to execute through the critical 2026 window before Trump's China engagement and ASI timelines compress further.
What we're NOT funding: No large staff. No fancy office. No broad public campaigns (we deliberately avoid "stealing thunder" from influencers who might champion this). This is operational spending for direct persuasion during a narrow window.
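As a quick sanity check of the figures above (a sketch using amounts copied from the budget text, with personnel taken at its $120k floor, not an official accounting), the line items sum to the lower end of the $150-400k ask:

```python
# Sanity check of the budget categories quoted above (all figures in USD,
# taken from the text; personnel uses its stated $120k floor).
consultants = {"treaty_mechanism": 10_000, "china_policy": 8_000, "communications": 7_000}
travel = {"second_tour": 20_000, "follow_up": 10_000}
infrastructure = {"secure_comms": 3_000, "fiscal_legal": 5_000, "website": 2_000,
                  "research_tools": 3_000, "office": 2_000}

budget = {
    "personnel": 120_000,
    "consultants": sum(consultants.values()),        # 25,000
    "travel_events": sum(travel.values()),           # 30,000
    "infrastructure": sum(infrastructure.values()),  # 15,000
    "strategic_reserve": 10_000,
}

total = sum(budget.values())
print(f"Total budgeted: ${total:,}")  # $200,000 -> lower end of the $150-400k range
```

The subcategory sums (consultants $25k, travel $30k, infrastructure $15k) match the headline figures given above, and the $200k total sits at the lean end of the requested range.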
Who Is on Your Team? What's Your Track Record on Similar Projects?
SHORT ANSWER
Lead: Rufo Guerreschi, founder of the Trustless Computing Association with 15+ years in defense-grade IT security, international tech governance, and policy.
Support: 40+ contributors, advisors, and team members, including current and former officials from the UN, NSA, WEF, Yale, Princeton, and leading AI safety experts.
Track record: Previous $60k SFF grant in February 2025 successfully produced the 356-page Strategic Memo v2.6, completed an October 2025 US Persuasion Tour with 26+ AI lab meetings and 16+ DC policymaker engagements, and established 50+ introducer relationships spanning Bay Area, DC, Vatican, and Mar-a-Lago networks.
DETAILED ANSWER
Core Team:
Rufo Guerreschi — Founder/Project Director
15+ years leading defense-grade IT security and trustless computing initiatives
Founded the Trustless Computing Association, focused on global AI governance and verification mechanisms
Deep networks across the AI safety community, national security establishment, Vatican AI ethics initiatives, and tech leadership
Spent 18 months (July 2024-Dec 2025) architecting The Deal of the Century strategy with extreme rigor
40+ Exceptional Advisors & Contributors. Our team and network spans:
UN & Multilateral Organizations: Experience in actual treaty negotiation and global governance mechanisms
US Intelligence Community: Former NSA officials with deep China expertise and AI assessment access
Elite Academia: Policy faculty from Yale, Princeton bringing geopolitical strategy frameworks
AI Safety Community: Leading researchers who understand both technical capabilities and risks
Religious Leadership: Vatican AI advisors (critical for Pope Leo XIV outreach)
World Economic Forum: Experience in public-private partnership models for technology governance
Track Record — What We've Already Delivered:
SFF Grant Performance: Received $60k in February 2025. Delivered:
356-page Strategic Memo v2.6 (published Dec 30, 2025) — Deep psychological profiles of 14 key influencers, detailed treaty mechanisms, historical analysis of Baruch Plan lessons
Influencer-specific Open Letters tailored to each target's values, anticipated objections, and decision-making psychology
50-minute video case for introducers explaining the full strategic logic
Network activation of 50+ warm introducers across the Bay Area, DC, Rome, and Mar-a-Lago
1st Persuasion Tour (October 2025) Results: See our Achievements page for details:
Bay Area: 26+ meetings with AI lab policy staff at OpenAI, Anthropic, and DeepMind. Direct Letters hand-delivered to Altman, Amodei, and Hassabis at their headquarters.
Washington DC: Meetings with 16+ researchers who have direct access to policymakers. Leveraged former intel community relationships for national security establishment conversations.
Network Building: Vatican, NSA, and Mar-a-Lago introducer relationships established and cultivated.
Why This Team Can Win:
Not academic theorists — we're operators with genuine relationships to influencers and introducers. Our advisors have negotiated actual treaties, briefed presidents, and built AI systems.
Not naive optimists — our Strategic Memo includes detailed failure modes, counterfactual analysis, and specific mechanisms addressing why past attempts failed. One SFF evaluator noted, "This proposal is going for the throat, which I like."
Proven execution velocity — conceived project in July 2024, secured $60k by February 2025, published 130-page Strategic Memo by September, completed US Persuasion Tour by October, and expanded to 356-page v2.6 by December. We move fast when windows are narrow.
Extreme capital efficiency — operating on ~$7,500/month burn rate, compared to typical DC policy shops at $50,000-$100,000/month. We're 10x more capital efficient because we're operator-led and leverage 1,500+ hours of volunteer and pro bono work.
What Are the Most Likely Causes and Outcomes If This Project Fails?
SHORT ANSWER
Failure modes: (1) Insufficient influencer coordination—we reach 1-2 influencers but lack critical mass to sway Trump, (2) timeline failure—influencers buy in but too late for Trump's April 2026 China engagement, (3) Xi non-reciprocation—Trump moves but China doesn't follow, (4) treaty creates authoritarian outcomes—a concern we address through distributed governance architecture, (5) alternative AI governance efforts—unilateral US approaches or weak international frameworks preempt bolder treaty options.
Outcomes if we fail: The US-China AI arms race accelerates, ASI development proceeds uncontrolled with a 10-90% extinction risk (per leading researchers) in 1-5 years, or authoritarian AI lock-in concentrates global power permanently.
But even partial success matters: Moving even 1-2 influencers toward treaty thinking shifts the Overton window and creates infrastructure for future attempts.
DETAILED ANSWER
Failure Mode #1: Insufficient Critical Mass (Probability: 40%)
We reach 1-2 influencers (e.g., Altman becomes privately supportive) but fail to activate the 3-4 needed for Trump to take it seriously. Trump responds to consensus among his trusted circle, not individual voices.
Mitigation: Our three-stage coalition strategy targets multiple influencer types (Christian humanists, techno-humanists, and media figures). If Altman doesn't bite, we pivot harder to the Bannon + Pope + Vance combination. Our Strategic Memo has tailored pitches for 14 different profiles precisely because we can't predict who will move first.
Failure Mode #2: Timeline Miss (Probability: 35%)
Influencers eventually buy in but after Trump's April 2026 China meeting concludes or ASI capabilities accelerate past the governable stage. AI labs are racing toward superintelligence with 2-4 year timelines; Microsoft AI CEO Mustafa Suleyman recently stated recursively self-improving AI could emerge within 3-5 years.
Mitigation: We're already executing. The 2nd Persuasion Tour begins in January 2026, timed for the April window. Pre-positioned introducers mean we're not building networks from scratch.
Failure Mode #3: China Non-Reciprocation (Probability: 25%)
Trump moves but Xi doesn't follow or demands terms unacceptable to the US. Our theory requires US-China co-leadership.
Mitigation: Our Strategic Memo includes detailed analysis of Xi's incentives—China's current AI lag, domestic pressures from automation, and vulnerability to being shut out of Western AI supply chains. We argue Xi has strong reasons to engage if approached properly. China has consistently called for global AI coordination. But this remains an uncertainty we can't fully control.
Failure Mode #4: Treaty Creates Authoritarian Outcomes (Probability: 15-20%)
A legitimate concern, particularly within the AI safety community, is that any treaty led by Trump and Xi could concentrate power in dangerous ways, creating authoritarian outcomes worse than uncontrolled ASI.
Mitigation: Our Strategic Memo dedicates an entire chapter to "A Treaty Enforcement that Prevents both ASI and Authoritarianism" (pages 130-139). Drawing on the Acheson-Lilienthal Report's original governance architecture, we propose distributed verification systems, exit rights for participating nations, and democratic oversight mechanisms that prevent capture by any single power. The humanist alliance we're building—anchored by figures like Vance, Pope Leo XIV, and Amodei who share concerns about power concentration—serves as an internal check against authoritarian drift.
Failure Mode #5: Alternative Approaches Preempt (Probability: 20%)
US pursues unilateral AI advantage strategy OR weak international frameworks get announced first and crowd out appetite for bolder approaches.
Mitigation: We frame this as complementary to American AI dominance. "Peace through strength" means leveraging the US lead to lock in advantages via a treaty. We position Trump's Genesis Mission as the linchpin—a treaty provides the international infrastructure that makes Genesis succeed.
What Happens If We Fail:
Most likely world: Uncontrolled ASI race — US and China sprint toward superintelligence without coordination. Leading AI lab executives (including Musk and Amodei) estimate a 10-20% extinction risk from unaligned ASI. Even "merely" catastrophic scenarios involve massive power concentration or authoritarian lock-in.
Why Even Partial Success Matters:
Overton window shift — Even if Trump doesn't fully embrace this, we're moving elite conversation from "AI regulation" to "global AI treaty"
Infrastructure for future attempts — The relationships, analysis, and mechanisms we're building don't expire
Internal AI lab pressure — Field engagement with engineers creates internal champions who may influence leadership
Risk-adjusted expected value: Even a 5-10% probability shift on preventing extinction-level risks justifies enormous effort. Reagan won the Cold War without firing a shot. Trump could win the Intelligence War in one deal.
How Much Money Have You Raised in the Last 12 Months, and from Where?
SHORT ANSWER
Total raised: ~$75,000. Primary funding: a $60,000 Speculation Grant in February 2025 from the Survival and Flourishing Fund (SFF), which was founded by Jaan Tallinn. Additional: $10,000 from Ryan Kidd, plus several thousand from small individual donors and personal bridge funding to enable the 1st Persuasion Tour. We are pursuing $150-400k in additional funding through Manifund and other sources. Operated entirely volunteer-based until February 2025 (1,500+ hours); prior funding spent on Strategic Memo development, the 1st Persuasion Tour, and partial lead organizer compensation.
DETAILED ANSWER
Primary Funder: Survival and Flourishing Fund
$60,000 Speculation Grant (February 2025). SFF was founded by Jaan Tallinn (Skype co-founder and leading AI safety philanthropist) and focuses on existential risk reduction and civilization-scale challenges. One SFF evaluator's feedback: "This proposal is going for the throat, which I like." SFF also supports organizations like the Future of Life Institute, AI Impacts, and other longtermist initiatives.
How the February 2025 SFF Grant Was Spent:
~$35,000 — Partial compensation for lead organizer Rufo Guerreschi's work (10 months)
~$14,000 — Consultant fees and venues for week-long writing retreats in Trevignano Romano, Italy, to draft Strategic Memo with contributors
~$6,000 — 1st US Persuasion Tour travel (flights, ground transport, Bay Area and DC)
~$5,000 — Overhead, digital services, secure communications, website hosting
Additional Funding:
$10,000 — Ryan Kidd (November 2025)
~$5,000 — Small individual donors and personal bridge funding to cover tour costs
Current Funding Status:
Operating on an ~$7,500/month burn rate. Current runway extends through Q1 2026 with tight margins. Seeking $150-400k to execute the 2nd Persuasion Tour and sustain operations through the critical April 2026 window.
Why Funding Now vs. Earlier: Operated volunteer-only until February 2025 because the strategic analysis required time to get right. Can't rush deep influencer psychological profiling, treaty mechanism design, and historical case analysis. The $60k enabled us to do that work properly. Now we're in execution phase—the 356-page Memo is done, relationships are activated, and we need sustained operations through the Q1-Q2 2026 window before Trump's China engagement.
Burn Rate Context: $150-400k over 12-18 months = ~$10-22k/month average. Extremely lean for this type of high-stakes advocacy. For comparison, typical DC policy shops spend $50,000-$100,000 per month. We're 5-10x more capital efficient because we're operator-led, not administrator-heavy, and leverage extensive pro bono advisor time and volunteer work.
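The burn-rate arithmetic above can be sketched as follows; the raise/runway pairings are illustrative assumptions, not official projections:

```python
# Illustrative burn-rate scenarios for the $150-400k ask over 12-18 months.
def monthly_burn(total_usd: int, months: int) -> float:
    """Average monthly spend if total_usd is spread evenly over months."""
    return total_usd / months

scenarios = [(150_000, 12), (150_000, 18), (400_000, 12), (400_000, 18)]
for total, months in scenarios:
    print(f"${total:,} over {months} mo -> ${monthly_burn(total, months):,.0f}/month")
```

Depending on how raise size and runway pair up, monthly burn spans roughly $8k to $33k; the quoted ~$10-22k/month average corresponds to the mid-range pairings, and all scenarios remain well below the $50-100k/month typical of DC policy shops.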
How This Initiative Aligns with and Complements Coefficient Giving Network Programs
The Coefficient Giving Network has invested heavily in AI safety research, governance capacity-building, and policy advocacy—primarily through a California → Federal → Global regulatory pathway. Our initiative complements rather than competes with this approach by addressing a gap that institutional caution has left unfilled: direct engagement with the political figures who will actually shape global AI trajectory in the critical 2026 window.
Our Strategic Memo v2.6 includes a detailed 20-page chapter on Dario Amodei and the Coefficient Giving network (pages 221-244), analyzing why Anthropic and CG-aligned organizations have been hesitant to advocate for bold global treaties—and why that hesitation may now be counterproductive given political realities.
The core concern we address: Many in the CG network fear that a global AI treaty-making process led by figures like Trump and Xi could create authoritarian outcomes worse than the risks of ASI itself. Our Strategic Memo directly tackles this through detailed analysis of treaty enforcement mechanisms that prevent both ASI AND authoritarianism—drawing on lessons from the original Baruch Plan's failures and the Acheson-Lilienthal Report's governance architecture.
For more information: cbpai.org | Strategic Memo v2.6 | Achievements | Roadmap | Year-End Update