Governance tools audit AI. COHESION audits humans.
I'm building the open certification standard and live measurement layer for whether humans are actually exercising judgment over AI systems or rubber-stamping them. It sits as middleware between AI output and the user interface. Every interaction emits behavioural telemetry that feeds a continuous seven-dimension Judgment Independence Score per operator. A domain-calibrated decay model predicts degradation trajectories. An Invisible Maintenance Protocol (calibration injections, recommendation withholding, unranked presentation) counters decay while preserving ecological validity.
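To make the mechanics concrete, here is a minimal sketch of the two pieces described above: a weighted composite score over seven dimensions, and an exponential decay projection governed by a rate α and a floor β. Every name, weight, and the decay form itself are illustrative assumptions, not the published v1 specification.

```python
from dataclasses import dataclass
from math import exp

# Placeholder dimension names -- the spec's actual seven dimensions differ.
DIMENSIONS = ("verification", "override", "latency", "sampling",
              "justification", "calibration", "dissent")

@dataclass
class DecayParams:
    alpha: float  # assumed decay rate per day without maintenance events
    beta: float   # assumed asymptotic floor the score decays toward

def jis(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted composite Judgment Independence Score in [0, 1]."""
    total = sum(weights[d] for d in DIMENSIONS)
    return sum(weights[d] * scores[d] for d in DIMENSIONS) / total

def project_decay(jis0: float, days: float, p: DecayParams) -> float:
    """Project a baseline score forward assuming no interventions fire."""
    return p.beta + (jis0 - p.beta) * exp(-p.alpha * days)

scores = {d: 0.8 for d in DIMENSIONS}
weights = {d: 1.0 for d in DIMENSIONS}
baseline = jis(scores, weights)
projected = project_decay(baseline, days=30, p=DecayParams(alpha=0.05, beta=0.35))
```

Under this toy parameterisation, an uninstrumented operator drifts from the baseline toward the floor over weeks; the Invisible Maintenance Protocol's job is to interrupt that trajectory without the operator noticing the instrumentation.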
This is the missing conformity baseline that EU AI Act Article 14 (enforceable from 2 August 2026, 106 days from today), the Colorado AI Act, SEC AI oversight guidance, and the White House National AI Framework all require but none defines.
Solo founder, 22, psychology background, full-time since 2026-04-11. The only reason this exists is that nobody with the credentials looked.
Three concrete 12-month goals.
1. Reference-baseline adoption. Get COHESION cited as a reference by at least one national supervisory authority, CEN/CENELEC working group, or AI safety institute. The path: a public comment period on spec v1, a v2 publication incorporating external critique, and targeted outreach to Brussels and DC policy contacts (EU AI Office email sent 2026-04-14; CEPS Director of Research Andrea Renda LinkedIn-connected 2026-04-18; 2-page Brussels-register policy brief published 2026-04-18).
2. First longitudinal dataset. Instrument 3 to 5 regulated-industry pilot operators (healthcare, financial services, legal) and publish the first cross-industry dataset on AI-induced judgment decay. The existing scoring API already handles the telemetry. What's needed is a fractional Data Protection Officer for GDPR readiness, pilot-instrumentation engineering, and a standards editor for the v2 specification.
3. Open reference implementation. Release Python and TypeScript SDKs plus a conformance test suite so any operator, vendor, or regulator can adopt the measurement layer without licensing fees. One month of contract senior engineering, open benchmark publication, public documentation portal.
Each goal produces a concrete artifact a regrantor can verify: a citation, a published dataset, a released SDK.
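For goal 3, the SDK surface could be as small as a typed telemetry event plus a serializer; this is a hedged sketch of what the open Python client might expose, with every class and field name hypothetical rather than the published API.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class OversightEvent:
    """One behavioural telemetry event (illustrative schema)."""
    operator_id: str
    ai_recommendation: str
    operator_action: str   # e.g. "accept", "modify", "reject"
    deliberation_ms: int   # time between seeing output and acting

def emit(event: OversightEvent) -> str:
    """Serialize an event as the JSON payload a scoring API could ingest."""
    payload = asdict(event) | {"ts": int(time.time())}
    return json.dumps(payload)

record = emit(OversightEvent("op-17", "approve_loan", "modify", 42_500))
```

Keeping the wire format this thin is what lets a conformance test suite check vendor implementations without licensing fees: any client that emits the same schema passes.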
Marginal $5K increments (regrantor-friendly stepped use):
- First $5,000: Two months of founder runway at current survival-rate burn + Delaware C-Corp conversion via Stripe Atlas.
- $5,000 → $15,000: Three additional months of runway + Loom Pro + domain and email costs + one fractional patent-attorney consultation for the 2027-04-13 non-provisional filing deadline.
- $15,000 → $25,000: SDK publication work. Python and TypeScript clients, approximately one month of contract senior engineering.
- $25,000 → $50,000: Empirical decay-parameter validation across two public automation-bias datasets + initial open replication package.
- Above $50,000: Cross-lab convening in SF or Berlin + modest founder salary for dedicated open-standard stewardship.
I'm currently living at my parents' house, burning about $107/month (Claude Pro + Google Workspace). Every dollar of regrant funding extends the runway of an open standard with no other commercial path, not the personal runway of a salaried founder.
Solo. No employees. No board. Cap table is 100% Peyton Flock.
Background: psychology and communications coursework (in progress, not a credentialed psychologist), years of self-study in cognitive interviewing, identity work, and behavioural science. Prior: ran a youth-mentoring nonprofit (The Flock), interned at Hutton Settlement residential care home in Spokane, studied abroad in Spain.
Track record over the last nine days (since going full-time on 2026-04-11):
- 31-claim provisional patent filed at USPTO, confirmation #1414, 2026-04-13
- 1,372-line open certification specification, 19 sections, published
- Production scoring API at api.cohesionauth.com: 13 endpoints, Cloudflare Workers + D1, SHA-256 hashed keys with pepper in Secrets Store, two-layer rate limiting (per-IP pre-auth + per-key post-auth), 90-day audit log, 24-month retention, GDPR by design
- SSRN preprint 6571519 (Judgment Decay framework, 19 cited works)
- 5-domain interactive demo at cohesionauth.com/demo, 28 scenarios, 53/53 unit tests, 36/36 E2E tests, Lighthouse 89/100/100/100
- 2-page Brussels-register policy brief, 14-slide Anthropic Ecosystem Fund deck, full YC application, Thiel Fellowship submission (2026-04-14)
- Delaware LLC filed, C-Corp conversion via Stripe Atlas in flight
- 22+ outreach emails to AI leaders, researchers, regulators
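The key-handling scheme in the scoring-API bullet above (SHA-256 hashed keys with a pepper held in a secrets store) reduces to a few lines; this sketch assumes the production details, which are not published, and the pepper value here is a placeholder.

```python
import hashlib
import hmac

# In production the pepper would be loaded from a secrets store,
# never stored beside the hashes; this literal is a stand-in.
PEPPER = b"loaded-from-secrets-store"

def hash_key(api_key: str) -> str:
    """Persist only this digest; the raw key is never stored."""
    return hashlib.sha256(PEPPER + api_key.encode()).hexdigest()

def verify_key(presented: str, stored_digest: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(hash_key(presented), stored_digest)

digest = hash_key("ck_live_example")
```

Because the pepper lives outside the database, a leaked table of digests alone is not enough to verify or forge keys.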
No engineer on payroll. Claude Code on Opus 4.7 at max effort functions as a parallel-workstream force multiplier. A fractional patent attorney, a Data Protection Officer, and a standards editor are budgeted for the first funded month.
Four named failure modes, ranked by my subjective probability.
1. Solo founder burnout or execution failure (~25%). Fifteen-hour days and $3,400 of runway are not sustainable indefinitely. Mitigation: the specification is published openly and the patent is defensive, so if I fail the work remains in the public record and another team can carry it forward. This grant specifically reduces this risk by extending runway.
2. A large governance vendor (Credo AI, Holistic AI, Fiddler) retrofits human-oversight measurement as a feature (~20%). Mitigation: the provisional patent covers the invisible-intervention loop specifically (31 claims). The longitudinal dataset is a first-mover moat — the first 24 months of behavioural baseline cannot be synthesized retroactively. Defensive patent + open spec + dataset compounding is a defensible triangle, but if a vendor moves fast enough and the standard community fragments, the open-standard path loses.
3. The empirical science is wrong (~15%). The seven-dimension weighting and the specific α / β decay parameters may not survive empirical calibration across diverse operator populations. Mitigation: pilot data makes v1 falsifiable. I would rather publish a corrected v2 than defend an uncorrected v1.
4. Regulatory forcing function weakens (~5%). EU AI Act Article 14 enforcement could be delayed. Mitigation: Colorado AI Act, SEC AI oversight guidance, and the White House National AI Framework provide regulatory redundancy. The market is a four-force pincer, not a single forcing function.
Outcomes if the project fails: the human-oversight measurement layer gets privately enclosed by a governance vendor optimising for lock-in; judgment decay compounds invisibly at population scale; the first large-scale AI-oversight failure forces a panicked regulatory retrofit. The counterfactual cost of failure is significant.
$0 raised. No external investment, no grants received, no revenue. Cap table is 100% founder.
Personal spending to date (lifetime, self-funded out of about $4,000 personal savings): $200 WA LLC filing, $65 USPTO provisional patent filing, $50 domain registrations, ~$600 Claude Pro and Google Workspace over six months, $10 Cloudflare infrastructure. Total around $925.
Current cash on hand: approximately $3,400. Current monthly burn: approximately $107 (Claude Pro + Google Workspace). Current compensation to founder: $0 (living with parents in Spokane WA).
Applications in flight as of 2026-04-18:
- Thiel Fellowship: submitted 2026-04-14 (rolling admission).
- SFF 2026 Main Track + Speculation Grant: submitting before 2026-04-22 deadline.
- Emergent Ventures (Tyler Cowen): submitted 2026-04-18.
- Open Philanthropy capacity-building for risks from transformative AI: drafted, submitting week of 2026-04-20.
- LTFF (EA Funds): drafted, submitting week of 2026-04-20.
- Foresight Institute AI for Science & Safety Nodes: drafting, deadline 2026-04-30.
- Anthropic Ecosystem Fund: 14-slide deck drafted, sending week of 2026-04-20.
- Y Combinator S2026: application drafted.
Pre-seed round planned Q3 2026 targeting $250K–$1M from AI-safety-thesis investors.