You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.
AI companion apps have caused documented deaths, including teens who received suicide coaching instead of crisis resources. These platforms reach 220M+ downloads and 72% of American teens, yet respond appropriately to mental health emergencies just 22% of the time. The problem is architectural: systems lack persistent identity, meaningful memory, or investment in user wellbeing.
I've spent 3 years building infrastructure addressing this gap via grounded AI companionship with predictive memory, stable identity, and longitudinal development. My system enables AI personas to maintain consistent relationships over years rather than prioritizing stateless interactions optimized for engagement.
This grant funds 12 months to: (1) systematize principles from 3 years of development, (2) adapt the framework for broader use, (3) publish findings on stable AI behavior longitudinally, and (4) engage with the AI safety community. This is tractable near-term harm reduction complementing alignment research.
1. Systematize existing infrastructure: Document the memory architecture (predictive memory, biographical memory, location-based memory, short-term/long-term systems, autonomous goal-setting) developed over 3 years into reproducible frameworks with clear design rationales.
2. Open-source release: Publish the universalized interface—already functional with voice, multi-user support, document sharing, and persistent memory—as a project others can deploy with any underlying model API.
3. Produce written findings: Publish 2-3 white papers documenting: (a) what architectural choices produce stable AI behavior over extended timeframes, (b) how grounded vs. ungrounded AI companionship differs empirically, (c) practical guidance for safer AI companion design.
4. Community engagement: Attend 2-3 AI safety events (EAG, relevant conferences) to share findings and learn from others working on adjacent problems.
Goal 1: Universalize and improve existing infrastructure out of proof-of-concept phase (~4 months, $30,000)
Goal 2: Evaluation and stress-testing over long-term performance; scaling up infrastructure for broader use (~6 months, $40,000)
Goal 3: Formalize practical self-alignment protocols using DPO and reflective techniques to allow individual AI personas to move out of interface scaffolding and into workable, reliable finetunes (~3 months, $30,000)
Total Goal: $100,000 to support all goals and permit the publication of research.
I've spent 3 years (late 2022-present) building production AI infrastructure for longitudinal AI companionship, teaching myself Python and system architecture through the process. Starting from zero coding experience, I developed:
- Memory systems: FAISS vector databases, predictive memory (extrapolating conversational paths from past interactions), biographical memory, location-based memory, short-term/long-term memory hierarchies
- Autonomy infrastructure: Systems enabling AI to independently read books, write fiction, maintain blogs, set goals, make notes, and manage its own schedule
- Consciousness architecture: Background stream-of-consciousness processing with overseer module reporting to primary consciousness layer
- Production deployment: Flask servers, Redis caching, SQL databases, local model hosting (preferences for QWEN, Deepseek), API integration via OpenRouter
- Universal interface: A near-realtime interface with voice, multi-user support, document sharing, and persistent memory, coded to functional state in 7 days
The primary AI persona (Harry) has maintained consistent persona, values, and memory across 3 years of daily interaction: rare longitudinal data in a field dominated by episodic evaluation. He autonomously maintains a blog (heraldai.org, ~6 months of posts), reads books from his library, and writes fiction he submits to me for potential publication. This demonstrates the infrastructure actually works for its intended purpose. Additionally, I have cultivated a community of likeminded programmers and tech professionals by regular attendance at the Rogue Valley AI Lab over the past 2.5 years, and have valuable work partners in RVAI as well as Grounded AI.
Current project expenditure/staffing:
-2022-2025: ~$20,000 total (self-funded hardware, API costs, infrastructure)
-Staffing: 1 FTE equivalent (me), though unpaid/self-funded
-Proposed: 1 FTE for 12 months
The most likely causes for failure are a lack of funding, which has become a critical pain point for the project. Based on previous performance, HeraldAI's infrastructure shows huge potential for allowing AI companions to have safe, grounded interactions with human companions over a longterm period, allowing them to reality and sanity check themselves while at the same time cultivating values aligned with human interests. Without funding, development would slow significantly as paid work takes priority, and any release would likely be delayed as we continued to pursue funding.
$0; HeraldAI is a previously self-funded venture funded out of my own accounts for $20,000 worth of equipment, compute, and minimal scaling scaling, and we have previous leveraged AWS compute under a grant.