AeP Labs is bridging the AI liability gap by developing the Agent Economic Protocol (AEP). Current AI safety relies on linguistic filters that cannot be actuarially underwritten. AEP changes this by using Entropy Circuit Breakers and Thermodynamic Bonding within Trusted Execution Environments (TEEs) to provide deterministic, hardware-anchored safety guarantees.
By quantifying agentic risk through the Asymptotic Equipartition Property, we are creating the first technical framework that allows insurance providers to price and underwrite autonomous agent behavior. We move AI safety from "best effort" to "insured liability."
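To make the AEP idea concrete, here is a minimal, hypothetical sketch (not the actual AEP SDK) of an entropy circuit breaker. By the Asymptotic Equipartition Property, the empirical entropy rate of a long trace of agent actions, -(1/n) Σ log₂ p(xᵢ), converges to the source entropy H(X); a trace that drifts far outside that "typical set" band signals atypical, and therefore unpriceable, behavior. All names and thresholds below are illustrative assumptions.

```rust
// Hypothetical sketch of an AEP-style entropy circuit breaker.
// Type and method names are illustrative, not the real AEP SDK API.

/// Trips when the empirical entropy rate of an action trace leaves the
/// typical-set band [h - eps, h + eps] around the expected entropy
/// rate `h` (in bits per action).
pub struct EntropyCircuitBreaker {
    expected_entropy: f64, // H(X), bits per action
    tolerance: f64,        // epsilon defining the typical set
}

impl EntropyCircuitBreaker {
    pub fn new(expected_entropy: f64, tolerance: f64) -> Self {
        Self { expected_entropy, tolerance }
    }

    /// `log_probs` holds log2 p(x_i) for each observed action x_i.
    /// By the AEP, -(1/n) * sum(log2 p(x_i)) -> H(X) on typical traces,
    /// so a large deviation flags atypical (high-risk) behavior.
    pub fn is_tripped(&self, log_probs: &[f64]) -> bool {
        if log_probs.is_empty() {
            return false;
        }
        let n = log_probs.len() as f64;
        let empirical = -log_probs.iter().sum::<f64>() / n;
        (empirical - self.expected_entropy).abs() > self.tolerance
    }
}

fn main() {
    // Fair-coin action source: H = 1 bit, each action has p = 0.5.
    let breaker = EntropyCircuitBreaker::new(1.0, 0.1);
    let typical = vec![-1.0_f64; 100]; // log2(0.5) = -1 per action
    assert!(!breaker.is_tripped(&typical));

    // A trace whose actions were far less probable than expected:
    // empirical entropy rate of 4 bits trips the breaker.
    let atypical = vec![-4.0_f64; 100];
    assert!(breaker.is_tripped(&atypical));
}
```

Because the check is a deterministic arithmetic bound rather than a linguistic filter, it is the kind of rule that could in principle run inside a TEE and be priced by an underwriter.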
Refine the entropy-based risk metrics to enable the first automated insurance products for machine-to-machine commerce.
Finalize the Rust SDK (July 2026) to allow agents to carry "Proof of Stake" and liability coverage into transactions.
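As a rough illustration of the SDK goal above, here is a hypothetical sketch (not the actual planned API) of how an agent might attach staked collateral and liability coverage to a transaction. Every type, field, and value is an assumption for illustration only.

```rust
// Hypothetical sketch of an agent carrying bonded stake and insurance
// coverage into a machine-to-machine transaction. Not the real AEP SDK.

/// Attestation that a staked bond and an insurance policy back this
/// agent, anchored to a TEE quote so counterparties can verify it.
#[derive(Debug, Clone)]
pub struct CoverageProof {
    pub bonded_stake: u64,   // collateral at risk, in protocol units
    pub policy_id: String,   // underwriter's policy reference
    pub coverage_limit: u64, // maximum insured liability
    pub tee_quote: Vec<u8>,  // attestation bytes from the enclave
}

#[derive(Debug)]
pub struct Transaction {
    pub payload: Vec<u8>,
    pub proof: CoverageProof,
}

impl Transaction {
    /// A counterparty accepts only if the insured limit covers the
    /// transaction's declared exposure and a TEE quote is present.
    pub fn acceptable(&self, exposure: u64) -> bool {
        self.proof.coverage_limit >= exposure && !self.proof.tee_quote.is_empty()
    }
}

fn main() {
    let tx = Transaction {
        payload: b"order".to_vec(),
        proof: CoverageProof {
            bonded_stake: 1_000,
            policy_id: "POL-DEMO-001".into(),
            coverage_limit: 5_000,
            tee_quote: vec![0xAA; 64],
        },
    };
    assert!(tx.acceptable(4_000));  // exposure within insured limit
    assert!(!tx.acceptable(6_000)); // exposure exceeds insured limit
}
```

The design intent is that liability travels with the transaction itself, so a counterparty can check coverage mechanically instead of trusting the agent's operator.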
I am the Technical Founder of AeP Labs. The project is supported by formal research collaborations, now being finalized, with professors at the University of Central Florida (UCF) and George Mason University (GMU) who specialize in information theory and secure systems.
I have filed five provisional patents for the Agent Economic Protocol (AEP) and participated in the NIST RFI on AI Agent security. My experience bridging complex market operations (real estate/brokerage) with technical systems (Rust, AI safety) allows me to approach safety from a unique actuarial and economic perspective rather than just a linguistic one.
The primary risk is a lack of adoption by the insurance industry for our proposed actuarial standards. If major underwriters (like those in Lloyd’s Lab) do not accept entropy-based metrics as a valid way to price risk, the economic incentive for the protocol decreases. A secondary risk is the rapid evolution of hardware; if Trusted Execution Environments (TEEs) face significant new vulnerabilities, our hardware-anchored "circuit breakers" would require a fundamental redesign.
Even if the protocol fails to achieve global standardization, the core SDK and safety framework will be released as open-source. This ensures the AI safety community benefits from a validated framework for measuring agentic risk through the Asymptotic Equipartition Property, while we maintain the commercial layer for enterprise-grade insurance pricing and liability management.
$0. AeP Labs has been entirely bootstrapped to date. While I have submitted proposals to the NVIDIA Inception program and to DARPA's I2O for research agreements and grants, I have not accepted any external capital or dilutive funding in the last 12 months. This allows me to keep the protocol's development neutral and focused on open-source safety standards.