AI Crisis Convening at India AI Impact Summit 2026

Technical AI safety · AI governance · Global catastrophic risks

AISA

Proposal · Grant
Closes February 13th, 2026
$100,000 raised · $60,000 minimum funding · $100,000 funding goal
Fully funded and not currently accepting donations.

Who we are
Established in 2024, AI Safety Asia (AISA) is a global non-profit working to position Asia as a leader in safe and responsible AI. We focus on reducing the risks of AI adoption while supporting societies to deploy AI safely.

Our work is grounded in an integrated, collaborative approach that builds bridges across disciplines, generations, and regions. Through three interrelated program pillars (convening, research, and capacity-building), we take practical steps to shape the future of AI governance in Asia.

Project summary
The India AI Impact Summit (February 2026) is a high-leverage convening moment: senior Asian government decision-makers and global technical experts will already be present. This creates a narrow window to move beyond high-level principles and advance practical coordination on AI crisis response.

AI incidents are inherently cross-border and time-compressed (e.g. deepfake-enabled fraud, AI-accelerated cyber operations, disruption of critical infrastructure), yet coordination mechanisms across governments remain limited or absent. This increases the risk that near-term shocks cascade into wider diplomatic or systemic failures.

Our objective is to test a focused “AI crisis diplomacy” intervention: convening senior officials and technical leaders to align on operational expectations, identify tractable next steps, and move quickly into follow-up (e.g. scoped pilots, taskings, or working groups with named leads).

What we propose
We propose three tightly scoped activities (all currently confirmed) around the India AI Impact Summit 2026:

  • A ministerial-level session at the Summit on AI Crisis Diplomacy in Asia

  • A conversion layer of 2–3 closed-door follow-up calls with interested governments or institutions to translate discussion into next steps (principles, pilots, or a second convening)

  • A non-public synthesis memo (2–4 pages) circulated to attendees within 10 business days

AISA will design and deliver the session, follow-ups, and memo, leading agenda-setting, speaker curation and briefing, production, and on-the-ground delivery.

We will do this with Stuart Russell and Mark Nitzberg (via Berkeley’s CHAI), alongside IASEAI and AI Safety Connect, to ensure the right mix of senior government, technical safety, and policy leadership in the room. 

What success looks like
After these activities:

  • 3–4 concrete proposals are actively co-designed with participating governments or institutions; and

  • at least one proposal has confirmed senior decision-maker buy-in

AISA would then either coordinate a broader network of AI governance expertise, funders, and partners for implementation, or hand off to the best-positioned driver. 

What this looks like in practice

A 55-minute, tightly curated ministerial dialogue, structured as:

  1. Opening framing

  2. Keynote on Global South leadership and legitimacy

  3. Three short interventions across policy/governance, capacity-building, and diplomacy/convening

  4. Strategic global respondents

  5. Senior official closing with a call to durable next steps

With a keynote from Stuart J. Russell OBE FRS, building on the “AI Red Lines” agenda launched at the UN.

Confirmed speakers:

  • Prof. Alejandro Reyes, The University of Hong Kong 

  • Stuart J. Russell OBE FRS, Berkeley 

  • H.E. Mr. Nezar Patria, Indonesia Digital Vice Minister

  • Wan Sie Lee, Cluster Director for AI Governance and Safety, IMDA Singapore 

  • Azizjon Azimi, Founding Chair of the Artificial Intelligence Council of the Republic of Tajikistan

With additional speakers pending.

The session will close with a structured expression-of-interest moment, enabling AISA to schedule follow-up calls within two weeks.

Why AISA
AISA has already built momentum as a neutral convener working toward cross-sector coalitions with governments, technical experts, and policy institutions across Asia. Details of work accomplished to date are available here.

The current AISA team is also positioned to act as a broker for AI governance in Asia, as well as a growing capacity-builder and implementation partner. 

Funding request
The minimum of USD 60,000 covers the Summit session and conversion layer:

  • staff / contractor time: $30,000

  • program activity costs: $27,500

  • overhead / administration: $2,500

A more detailed budget breakdown is available upon request. 

With moderately higher funding (approximately USD 100,000), AISA would extend beyond the convening by supporting a targeted reception co-hosted with UK AISI and Mila to mark the launch of the updated 2026 International Scientific Report on AI Safety (chaired by Yoshua Bengio). This would bring additional senior Asian government officials into a high-trust setting and enable follow-up that does not typically occur within a standard Summit session.

The reception is confirmed and will be hosted at the Canadian High Commission, which will cover the venue and core logistics. Additional funding would cover AISA staff time to manage outreach and programme flow, plus travel for on-the-ground delivery.


