As AI agents move from research demos to production systems, there is no standard infrastructure for monitoring their runtime behavior or verifying trust between agents. ClevAgent is a real-time monitoring platform that detects infinite loops, token waste, hallucination patterns, and unsafe behavior in autonomous AI agents, with automatic restart and alerting. Foragent is a companion trust layer that lets agents verify each other's identity, capabilities, and safety compliance before interacting. Both products are live in production today. This grant will fund open-sourcing key safety monitoring components so the broader AI safety community can build on them.
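To make the loop-detection idea concrete, here is a minimal sketch of how a runtime monitor can flag an agent stuck repeating itself. This is an illustrative assumption, not ClevAgent's actual API: the class name, parameters, and fingerprinting scheme are all hypothetical.

```python
from collections import deque
import hashlib


class LoopDetector:
    """Hypothetical sketch: flag an agent that emits the same action
    over and over, a common symptom of an infinite loop. Not the
    real ClevAgent implementation; names are illustrative only."""

    def __init__(self, window: int = 6, threshold: int = 3):
        self.threshold = threshold            # identical repeats before alerting
        self.history = deque(maxlen=window)   # recent action fingerprints

    def _fingerprint(self, action: str) -> str:
        # Hash actions so long tool calls compare cheaply.
        return hashlib.sha256(action.encode()).hexdigest()[:16]

    def observe(self, action: str) -> bool:
        """Record one agent action; return True if a loop is suspected."""
        self.history.append(self._fingerprint(action))
        recent = list(self.history)[-self.threshold:]
        # Loop suspected when the last `threshold` actions are identical.
        return len(recent) == self.threshold and len(set(recent)) == 1
```

In practice a monitor like this would sit in the agent's event stream and trigger the restart/alerting path on a positive result; a production version would also fingerprint action *sequences*, not just single repeated actions.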
Goal 1: Open-source ClevAgent's core agent safety monitoring library (loop detection, token anomaly detection, behavioral drift alerting) so any developer deploying AI agents can integrate runtime safety checks. Goal 2: Publish an open specification for agent-to-agent trust verification (the Foragent protocol), enabling interoperable safety compliance checks across different agent frameworks. Goal 3: Write and publish technical documentation and benchmarks showing how runtime monitoring catches failure modes that pre-deployment testing misses. We will achieve these by extracting the safety-critical components from our production systems into standalone open source packages with clear APIs, documentation, and example integrations.
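To illustrate the kind of check Goal 2 envisions, here is a toy challenge-response handshake in which one agent binds its declared capabilities to a nonce before interacting. The Foragent specification is not yet published, so every function name and message shape below is an assumption for illustration, using a pre-shared key for simplicity.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of an agent-to-agent trust handshake. The real
# Foragent protocol is unpublished; this only illustrates the concept.

def issue_challenge() -> str:
    """Verifier agent sends a fresh random nonce to the agent under check."""
    return secrets.token_hex(16)

def sign_challenge(shared_key: bytes, nonce: str, capabilities: list) -> str:
    """Prover agent binds its declared capabilities to the nonce via HMAC,
    so the claim cannot be replayed with a different capability list."""
    msg = (nonce + "|" + ",".join(sorted(capabilities))).encode()
    return hmac.new(shared_key, msg, hashlib.sha256).hexdigest()

def verify(shared_key: bytes, nonce: str, capabilities: list, sig: str) -> bool:
    """Verifier recomputes the HMAC; compare_digest resists timing attacks."""
    expected = sign_challenge(shared_key, nonce, capabilities)
    return hmac.compare_digest(expected, sig)
```

An interoperable spec would replace the pre-shared key with public-key identities and add a signed safety-compliance attestation, but the verify-before-interact flow is the same.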
At $5K (minimum): Open-source the core loop detection and token anomaly monitoring modules. Publish basic documentation and a Python package. At $15K: All of the above, plus open-source the behavioral drift detection system, publish the Foragent trust protocol specification, and create integration guides for popular agent frameworks (LangChain, CrewAI, AutoGen). At $25K (full goal): All of the above, plus build a comprehensive benchmark suite comparing runtime monitoring approaches, hire a part-time technical writer for documentation, and cover cloud infrastructure costs for running public demo instances. Funds will primarily go toward: developer time for extracting, testing, and packaging open source components; cloud hosting for documentation sites and demo instances; and technical writing.
Solo founder: Sean Kwon. I built both ClevAgent and Foragent from scratch and have them running in production. My background combines finance (JP Morgan and Deutsche Bank in Seoul, derivatives brokerage), business (University of Michigan Ross MBA), and AI (MIT Professional Education certificates in LLMs, Agentic AI, and AI Governance). I have hands-on experience building and deploying multi-agent AI systems, which is exactly how I identified the gap that ClevAgent and Foragent fill. The products exist and work today. This is not a research proposal; it is a request to open-source production-tested safety tools.
Most likely failure mode: insufficient adoption. If the open-source tools do not gain traction in the developer community, the impact would be limited even if the code is high quality. Mitigation: we will actively promote through AI safety forums, LessWrong, the EA Forum, and developer communities, and will integrate with popular frameworks to lower the adoption barrier. Second failure mode: scope creep. Trying to open-source everything at once could result in nothing being polished enough to use. Mitigation: the tiered funding structure ensures we deliver a focused, usable package even at the minimum funding level. The core monitoring library will ship first regardless.
$0 in cash funding raised. The company was incorporated in April 2026 and is entirely bootstrapped. We have received $1,000 in Azure cloud credits through Microsoft for Startups Founders Hub, with an additional $4,000 pending approval. We have a pending application to Emergent Ventures. This is our first grant application on Manifund.