TL;DR: India produces AI talent as young as 16 (like Raul John Aju). Thousands more like him discover AI early, fall in love with it, and get funneled, by culture, curriculum, and career glamour, entirely toward building AI tools or creating AI content. Not governing it. Not auditing it. Not making it safe. No one is making AI safety cool for Indian kids. Our AI Championship brings a high-energy, competitive format to 6th–8th graders that does two things at once: it teaches foundational AI safety concepts, and it makes AI safety careers feel as aspirational as those of AI builders. 10 workshops done. Our goal: 50 workshops across 10 cities in Phase 1. We're requesting $318,990 (detailed budget) to build the early funnel from which India's future BlueDot Impact fellows and independent AI safety researchers emerge, before the AI builder world claims them first.
India has 250 million school-going children. The ones who discover AI early (AI prodigies like 16-year-old Raul John Aju) are captured fast by startup culture and "build the next big AI tool" glory, before they've ever heard the words alignment, governance, or x-risk.
The intervention window is 6th–8th grade, before the glamour of building AI tools sets in permanently. A motivated 13-year-old who grasps an alignment problem today, completes BlueDot Impact's course, and publishes a novel solution isn't a fairytale; that's a Neel Nanda discovered in middle school, not a decade from now. It's unlikely, but not impossible, and worth every rupee of the attempt, based on our Phase 0 results below. And the cohort around that one student becomes the safety-first AI builders the world needs, as opposed to today's safety-later, profit-first default.
AI Championship aims to build that early funnel. Here's how it works (More details in the Project Goals section):
Step 1: The Introductory Workshop (the on-ramp)
Step 2: Daily Peer Learning Sessions or DPLS (the depth layer)
Step 3: The Championship (the spotlight)
We plan to execute this in four phases:
Phase 0: Proof of concept (completed)
10 workshops in schools across 2 Gujarat cities. Timing was tight: schools restricted DPLS to protect students' final exam prep time, but 9 in 10 students expressed interest in DPLS and the Championship, and 7 in 10 schools offered to carve out a dedicated 30-minute daily period in the new academic year. A strong enough signal to go wider
Phase 1: This grant
50 workshops across 10 Indian cities (in private venues rather than schools, due to summer vacations) + DPLS (held online), timed primarily during summer vacations for maximum reach. Target: 1,500 middle schoolers. This is what we are asking Manifund to fund: not the full vision, just another data-gathering pilot (wider than Phase 0) that tests the model across different parts of India
Phase 2: Contingent on Phase 1 outcomes + further funding
More workshops and DPLS embedded in schools. School → inter-school → city → state competitions. Target: 50,000 to 100,000 middle schoolers across India
Phase 3: Contingent on Phase 2 outcomes + further funding
State finalists compete at the National AI Championship. Plans to broadcast finals on OTT or TV. The moment "AI safety" becomes aspirational at scale in India
The goal of each step is to create the moment a 13-year-old in Chennai or Ahmedabad first realizes that AI has risks worth thinking about seriously, and then build enough conceptual depth that the students who find this exciting can step into BlueDot Impact courses without hitting a wall of complexity.
We aim to make AI safety cool for 6th-8th Grade students through these steps:
Step 1 — Introductory Workshop (the on-ramp)
Not a lecture: a 60-minute edutaining (educational + entertaining) session that introduces AI risk concepts and shows students the Championship trailer. We reach all students, then filter for the interested ones, because we're not just teaching a subject; we're making a cultural shift. Students leave knowing that AI safety is a real field, that it's intellectually exciting, and that there's a competition they can win. From our 10-workshop pilot: 9 out of 10 students wanted to sign up for DPLS, even with final exams approaching
Step 2 — Daily Peer Learning Sessions / DPLS (the depth layer)
Interested students sign up for a 30-minute DPLS (run online or as a dedicated school period) modeled as a junior BlueDot Impact course built for active reasoning, not passive reading
Every session ends with two things:
an open AI risk problem to wrestle with before the next session (e.g., How do you encode human values into an AI system? What happens when AI systems pursue misaligned goals? Who controls AI when it becomes more capable than its designers?), with reasoning quality tracked over time
one concrete, age-appropriate responsible AI tip/activity (e.g., spot the hallucination). AI literacy is about judgment, not just tool use. This is where judgment gets built
Students don't just learn AI risk concepts (alignment, robustness, governance), they explain them to peers, which is where real conceptual clarity happens. And these ideas don't stay in the sessions. Remember the electricity-saving campaigns that ran through Indian schools in the 90s? Kids went home and changed their parents' behavior. These ideas land differently at 13 than they do at 23: a 13-year-old doesn't have years of "that's not my problem" conditioning to unlearn. They just act. We're building the same effect for AI safety: students who carry these ideas home, spark dinner-table conversations, and quietly shift how their families think about AI
Throughout DPLS, we continuously introduce AI safety pathways (BlueDot Impact, fellowships, independent research), so students develop not just conceptual clarity but awareness of the roads available to them. Students who show strong interest aren't just pointed toward a pathway: we guide them through the application process until they're admitted. And for the ultimate nudge: when global leaders tell a 13-year-old that AI safety is the most important problem of our time, a pivot toward research doesn't feel like a sacrifice. It feels inevitable. (We are already in talks with Yoshua Bengio and Geoffrey Hinton)
Step 3 — The Championship (the spotlight)
Students compete across five levels: School → Inter-school → City → State → National, representing their schools and communities. The motivational structure mirrors the Science Olympiad or Bournvita Quiz Contest: the same competitive pride, community representation, and genuine stakes, redirected toward AI safety. We aim to broadcast the National finals on OTT/TV. The moment "AI safety researcher" becomes as aspirational in India as "AI builder"
Worst-case math:
From 1,500 students across 50 workshops, Phase 0 suggests 90% sign up for DPLS: that's 1,350 students.
If only 1% pursue AI safety careers, that's ~14 students entering the pipeline.
Cost per Neel Nanda discovered at middle school: ~$12,363 (less than the $15,000 stipend offered to a single MATS fellow, and comparable to BlueDot Impact's per-student cost)
But here's what makes this different from BlueDot Impact or MATS: the other 99% aren't wasted. Every student who deepened their AI risk literacy without pursuing a safety career becomes a safety-first AI builder, as opposed to today's safety-later, profit-first default. That's a win-win even in the worst case
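The worst-case funnel above can be reproduced with a few lines of arithmetic. One caveat: the ~$12,363 per-student figure only matches if the cost basis is the program-expenses budget line ($173,090.39) rather than the full $318,990 ask; that interpretation is our assumption, inferred from the numbers.

```python
# Back-of-envelope sketch of the worst-case funnel math.
# All figures come from this proposal; the cost basis (program
# expenses only, not the full grant) is an assumption.

students = 1_500        # Phase 1 target across 50 workshops
dpls_rate = 0.90        # Phase 0: 9 in 10 students signed up for DPLS
career_rate = 0.01      # worst case: only 1% pursue AI safety careers

dpls_students = round(students * dpls_rate)        # 1,350 in DPLS
pipeline = round(dpls_students * career_rate)      # ~14 enter the pipeline

program_expenses = 173_090.39                      # budget line below
cost_per_researcher = program_expenses / pipeline  # ~$12,364 each

print(f"DPLS students: {dpls_students}")
print(f"Pipeline entrants: ~{pipeline}")
print(f"Cost per researcher: ~${cost_per_researcher:,.0f}")
```

Even under these pessimistic rates, the per-researcher cost stays below the $15,000 MATS stipend the proposal uses as a benchmark.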
Our Theory of Change
Funding Goal: $318,990.36
Program expenses (AI workshops, travel, communications): $173,090.39
Staff (course facilitators/interns, social media manager, video editor): $46,596.00
Overhead (coworking space, Google Workspace, gamified AI Risk Literacy Platform): $35,505.90
Minimum Funding: $275,614.81
Program expenses (AI workshops, travel, communications): $173,090.39
Staff (course facilitators/interns; Directors will do social media mgmt & video editing): $26,400.00
Overhead (coworking space, Google Workspace, website with limited features instead of a gamified AI Risk Literacy Platform): $21,001.46
"In my experience, I've rarely seen students this engaged in a topic outside of exams. 9 out of 10 signed up for the daily learning sessions on the spot. Aashka brought something we didn't know we were missing. We've had many AI programs come through. This is the first one where parents reached out to me directly, asking how their child could continue."
— Lata Narayan, Principal, Shreyas Foundation, Ahmedabad, Gujarat
"I didn't know you could look inside an AI and see what's actually happening in there. Aashka didi said there's a whole research field just for that, called interpretability. I went home and looked up interpretability. My mind was blown."
— Vihaan Vaghela, 7th Grade, Divine Child School, Mehsana, Gujarat
Aashka Patel, Director @ EduHelp, Founder @ On AIR with Aashka (AI Championship Lead)
Conducted 10 workshops in schools with 6th–8th graders, handling everything from research, design, and execution to school outreach; the green light for Phase 1 (what this grant is for) came directly from that experience. Previously, my AI safety comics, built independently, were adopted by the AI Education Network (founded by Katharina Koerner) in Bay Area K-12 schools and are trusted by educators. As Founder @ On AIR with Aashka, I host a podcast with Yoshua Bengio as an upcoming confirmed guest and past episodes featuring White House AI policy experts, US Senate advisors, environmental sustainability researchers and architects, AI ethics professors, etc. (guest testimonials here). My role as a Bug Bounty Hunter at Anthropic keeps me grounded in the exact safety risks I teach students to reason about. I've translated AI safety for different audiences: corporate boardrooms (HSBC India, DataCamp, etc.), 1,500+ Indian college students (AI Ki Adalat), and 2,000+ Indian mothers (Mummy Padhegi AI). In collaboration with ISO experts, I created "AI Nutrition Labels for Everyday Consumers" to make AI transparency accessible to non-technical audiences, and I am currently working with IASEAI (Stuart Russell, Amir Banifatemi) and the Bureau of Indian Standards to turn it into a formal standard. I also have close connections in the Indian TV and OTT space who can turn this championship's broadcast vision into a reality
D Kadikar, Founder @ EduHelp (Executive Advisor)
He brings five years of on-the-ground presence with EduHelp in India's school and student ecosystem, with relationships across 200+ schools and 40+ colleges nationally, the same colleges from which we recruit and train facilitators to run workshops at scale. With nearly 40 years of entrepreneurial experience across India and the UK, he gives us immediate access to school networks and student communities that would otherwise take years to build, and advises on the operational realities of working within India's education landscape at scale.
EduHelp, founded in 2021, is a non-profit that is the operational vehicle for this AI championship. Our mission has always been simple: quality education for deserving students. Today, that means expanding into AI education: building programs that empower students to shape a safer, more humane future. And within that, our most urgent application is clear: ensuring India's young minds discover AI safety early.
1. Low Redirection Rate & The "AI Builder" Inertia
Cause: The "AI builder" narrative in India is very strong. Even after successful workshops, students may default to high-prestige, traditional paths, building consumer wrapper-apps or becoming AI influencers, rather than pursuing AI safety, governance, or x-risk careers
Outcome: We fail to hit our "Early Funnel" targets for the BlueDot Impact course. Even in this scenario, however, we achieve a high-leverage secondary win: shaping the public opinion of an entire generation and their families, a strategy Yoshua Bengio has publicly called a potential game-changer for global coordination (source)
Mitigation:
Social Prestige: We are positioning the National AI Championship to make AI safety as culturally celebrated as the National Spelling Bee
Parental Alignment: Since Indian parents are key career gatekeepers, we translate AI Safety into a high-demand career ROI that resonates with them
2. Conceptual Complexity for Middle-Schoolers
Cause: Middle-schoolers might find high-level AI alignment and technical safety concepts too abstract or mathematically intimidating
Outcome: Students disengage from the championship, treating it as too complex or intimidating, leading to lower retention and a failure to build an AI safety career early funnel
Mitigation: We translate complex AI safety research into intuitive, jargon-free frameworks and sometimes bring in guest lecturers already known for making hard concepts accessible (think: the people who make the math behind ML feel easy). Our pilot data backs this: 65%+ increase in student confidence rates, creating a natural bridge to advanced programs like BlueDot Impact without the typical wall of technical intimidation
Zero external funding for this championship. It has been entirely self-funded through personal savings, which is exactly why every rupee in this budget has been scrutinized.
Currently applying to: EA Long-Term Future Fund
Why April matters: Our 10 school workshops were conducted just before final exams, so schools didn't allow DPLS, to protect students' study time. Summer vacations (May–June) are the window: no exam pressure, maximum availability, maximum impact. We need to raise the funds in April to execute in May.
Stuart Russell believes we still have a decent chance at guaranteeing AI safety. That window stays open only if enough of the right people work on it. Shunryu Suzuki said: "In the beginner's mind, there are many possibilities; in the expert's, there are few." India's middle-schoolers are those beginners. We intend to discover them for AI safety.
We do this because we believe in Vasudhaiva Kutumbakam: the whole world is one family. And we don't want to jeopardize the very existence of that family through unsafe AI advancement 🌻
I will be very happy to be contacted and share more details about this AI Championship if anyone has any feedback or questions! Link to FAQs
Email: aashka@eduhelp.org.in
Any support would make a huge huge huge difference ❤️
Thanks a ton in advance,
Aashka Patel :)