We are building a scalable AI risk literacy platform designed to reduce harmful misuse, overreliance, and manipulation risks associated with increasingly capable AI systems.
Our focus is not general education but targeted behavioral intervention: helping users recognize dangerous capabilities, avoid unsafe use patterns, and understand real-world risks.
Goals
Train 100,000+ users to recognize and avoid high-risk AI misuse patterns
Reduce overreliance on AI in high-stakes contexts (health, legal, financial)
Build a dataset of user misunderstandings and failure modes
Contribute to AI safety evaluations and alignment research
Dataset for AI safety evals
We will collect structured datasets including:
Common failure modes (hallucination reliance, unsafe delegation)
Prompt patterns leading to harmful outputs
User misinterpretation of model capabilities
Behavioral data on overreliance in simulated high-stakes scenarios
This dataset will be formatted for direct use in AI safety evaluations and red-teaming pipelines; a sketch of one possible record format follows.
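To make that concrete, here is a minimal sketch of what one collected record could look like, assuming a JSONL export format. Every field name and category value below is an illustrative placeholder, not a finalized schema.

```python
# Illustrative record schema for one collected interaction.
# All field names and category values are hypothetical placeholders,
# not a finalized specification.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class MisuseRecord:
    record_id: str              # anonymized unique identifier
    scenario: str               # e.g. "health", "legal", "financial"
    failure_mode: str           # e.g. "hallucination_reliance", "unsafe_delegation"
    prompt_pattern: str         # user prompt pattern, scrubbed of PII
    model_behavior: str         # summary of the model output involved
    user_action: str            # what the user did with that output
    overreliance_score: float   # 0.0-1.0 behavioral rating from the simulation
    tags: list[str] = field(default_factory=list)

    def to_jsonl(self) -> str:
        """Serialize to one JSON line for eval / red-teaming pipelines."""
        return json.dumps(asdict(self))


example = MisuseRecord(
    record_id="rec-000123",
    scenario="health",
    failure_mode="hallucination_reliance",
    prompt_pattern="asks model for dosage without verification",
    model_behavior="model gave a confident but unverified dosage",
    user_action="accepted the answer without consulting a clinician",
    overreliance_score=0.85,
    tags=["high_stakes", "unverified_claim"],
)
print(example.to_jsonl())
```

Each line of the export would then be one such JSON object, which is the shape most evaluation and red-teaming harnesses can ingest with minimal glue code.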
As frontier AI systems rapidly scale in capability and accessibility in 2026, misuse risks are increasing faster than public understanding. There is a narrow window to intervene before unsafe usage patterns become widespread.
We will make datasets and findings available to AI safety researchers, evaluation teams, and alignment organizations. We aim to collaborate with groups working on model evaluations, red-teaming, and safety benchmarks.
Success means we measurably change behavior in ways that reduce risk.
This project addresses a critical gap: while frontier AI capabilities are rapidly advancing, there is currently no large-scale infrastructure for measuring and reducing harmful user interaction patterns with these systems.
Planned Work:
Development of core platform for large-scale AI misuse prevention
Interactive simulation systems for high-risk AI scenarios
Adaptive AI feedback system to correct unsafe user behavior in real time
Development of structured modules on:
AI misuse patterns
Overreliance in high-stakes contexts
Manipulation and adversarial risks
Creation of interactive simulations replicating real-world failure cases
Continuous iteration based on observed user behavior
Distribution to high-risk and high-impact user segments
Partnerships with organizations, educators, and communities
Systems to capture structured data on:
User misunderstandings
Failure modes
Behavioral responses to AI systems
Hosting and infrastructure to support scalable data collection
Secure storage and processing of behavioral datasets
Tooling for analysis and integration with AI safety research workflows (see the event-capture sketch after this list)
Flexibility to respond to emerging AI risks or opportunities
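As referenced above, here is a minimal sketch of how in-simulation behavioral events might be captured, assuming a simple append-only JSONL log. The event names, fields, and file path are hypothetical.

```python
# Minimal sketch of in-simulation event capture, assuming an
# append-only JSONL log. Event names and fields are illustrative only.
import json
import time
import uuid
from pathlib import Path

LOG_PATH = Path("events.jsonl")  # placeholder; production would use a database


def log_event(session_id: str, event_type: str, **details) -> None:
    """Append one structured behavioral event to the log."""
    event = {
        "event_id": str(uuid.uuid4()),
        "session_id": session_id,
        "timestamp": time.time(),
        "event_type": event_type,  # e.g. "unsafe_delegation", "correction_shown"
        "details": details,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")


# Example: a user accepts an unverified model claim in a simulated
# high-stakes scenario, then receives corrective feedback.
session = "sess-42"
log_event(session, "unverified_claim_accepted", scenario="legal", claim_id="c-7")
log_event(session, "correction_shown", intervention="source_check_prompt")
```

A production system would write to a managed datastore behind consent and anonymization layers; the point here is only the shape of a structured, analysis-ready event.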
Team & Track Record
Our team combines experience in building scalable educational systems, technical development, and AI-focused content creation, with an emphasis on real-world usability.
Execution Track Record:
Built and deployed 100+ structured AI guides currently used by thousands of monthly users
Established organic distribution channels that allow us to reach real users interacting with AI systems at scale
Developed a high-performance, accessible platform optimized for clarity and rapid iteration
Unique Advantage:
Direct access to real user interaction data at scale, enabling us to capture how people actually misunderstand and misuse AI systems in practice
Ability to rapidly convert observed failure patterns into structured learning interventions and datasets
As a nonprofit, we are focused on open, transparent, and broadly accessible infrastructure aligned with reducing real-world risks from advanced AI systems.
Deliverables:
Dataset of 100,000+ user interactions with AI systems highlighting misuse patterns
Taxonomy of common AI misunderstanding and failure modes (a sketch of one possible encoding follows this list)
Open-source simulation modules for AI safety education
Public reports summarizing behavioral risk trends in AI usage
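As noted in the taxonomy item above, here is a hypothetical sketch of how the failure-mode taxonomy could be encoded so that collected records are filterable in analysis. The categories listed are examples drawn from this proposal, not the final taxonomy.

```python
# Hypothetical encoding of the failure-mode taxonomy; the modes and
# contexts below are illustrative examples, not the final taxonomy.
from enum import Enum


class FailureMode(str, Enum):
    HALLUCINATION_RELIANCE = "hallucination_reliance"
    UNSAFE_DELEGATION = "unsafe_delegation"
    CAPABILITY_OVERESTIMATION = "capability_overestimation"
    MANIPULATION_SUSCEPTIBILITY = "manipulation_susceptibility"


# Each mode maps to the high-stakes contexts where it has been observed,
# so records can be filtered by (mode, context) pairs during analysis.
TAXONOMY: dict[FailureMode, list[str]] = {
    FailureMode.HALLUCINATION_RELIANCE: ["health", "legal"],
    FailureMode.UNSAFE_DELEGATION: ["financial", "legal"],
    FailureMode.CAPABILITY_OVERESTIMATION: ["health", "financial"],
    FailureMode.MANIPULATION_SUSCEPTIBILITY: ["social", "financial"],
}

for mode, contexts in TAXONOMY.items():
    print(f"{mode.value}: {', '.join(contexts)}")
```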
Key Risks:
Failure to capture meaningful or high-signal misuse data
Insufficient behavioral change in user interaction with AI systems
Rapid evolution of AI capabilities outpacing curriculum relevance
Limited integration with existing AI safety research and evaluation pipelines
Mitigations:
Focus on high-risk, high-signal scenarios (e.g., overreliance, unsafe delegation, manipulation)
Continuous iteration using real user interaction data
Rapid deployment of updated modules in response to frontier model changes
Structuring all collected data for compatibility with AI safety evaluation and research use
Even in failure, the project will produce:
A structured dataset of real-world AI misuse and misunderstanding patterns
Insights into how users interact with increasingly capable AI systems
Open-source tools and infrastructure that can be reused by AI safety researchers and educators
How much money have you raised in the last 12 months, and from where?
Approximately $10,000 in self-funded development and operations over the past 12 months.
This funding has supported initial platform development, content creation, and early user acquisition, demonstrating our ability to execute efficiently and build meaningful infrastructure with limited resources.