Nexus Spine is a formal-methods-backed project focused on a neglected AI-agent safety problem: delegated authority can become unsafe under composition.
As AI agents begin acting for users, companies, vendors, and other agents, the key risk is not limited to identity or authentication. A permission can appear valid at one step yet become overbroad, stale, revoked, or unsafe after a handoff, a tool invocation, an agent-to-agent delegation, or a context shift.
Nexus Spine is aimed at the missing safety layer: verifiable authorization, revocation, and evidence infrastructure for high-liability AI actions.
The core question is:
Can we prove that an AI-agent action was authorized, within scope, current, revocable, and auditable at the time it occurred?
This grant would fund 8–10 weeks of work to convert existing private formal-methods substrate into public-safe artifacts: specs, synthetic failure cases, conformance tests, and evidence receipt examples.
The goal is to make unsafe delegated authority testable rather than rhetorical.
The project will produce:
1. A Delegated Authority Failure Lab: synthetic AI-agent workflows showing how authority can fail under handoff, revocation, tool invocation, or agent-to-agent delegation.
2. A public invariant specification covering STOP / revocation dominance, non-amplifying delegation, fail-closed behavior, freshness checks, and evidence receipts.
3. Conformance test examples showing how an agent workflow can pass or fail authorization-safety checks.
4. Evidence receipt examples showing who authorized what, under what scope, when, and with what proof.
5. A concise threat model explaining how unsafe delegation creates compliance, security, safety, and accountability failures.
6. A public technical note or repository suitable for review by AI-safety, formal-methods, security, and governance researchers.
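To make the invariants above concrete, here is a minimal Python sketch of what an evidence receipt, a fail-closed authorization check, and non-amplifying delegation might look like. All names, fields, and semantics here are illustrative assumptions for this proposal, not the project's actual schema or formal-methods artifacts.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class Receipt:
    """Hypothetical evidence receipt: who authorized what, under what scope, when."""
    principal: str       # who granted the authority
    agent: str           # who may act on it
    scope: frozenset     # actions the grant covers
    issued_at: datetime
    ttl: timedelta       # freshness window

@dataclass
class RevocationList:
    revoked: set = field(default_factory=set)  # (principal, agent) pairs

def authorized(receipt: Receipt, action: str, now: datetime,
               revocations: RevocationList) -> bool:
    """Fail-closed check: every condition must hold, otherwise deny."""
    if (receipt.principal, receipt.agent) in revocations.revoked:
        return False                    # revocation dominates
    if now - receipt.issued_at > receipt.ttl:
        return False                    # stale grant fails the freshness check
    return action in receipt.scope      # must be within scope

def delegate(parent: Receipt, sub_agent: str, requested_scope: set,
             now: datetime) -> Receipt:
    """Non-amplifying delegation: a sub-agent's authority is capped by the parent's."""
    return Receipt(
        principal=parent.agent,
        agent=sub_agent,
        scope=frozenset(requested_scope) & parent.scope,   # never widens
        issued_at=now,
        ttl=parent.ttl - (now - parent.issued_at),         # never outlives parent
    )
```

For example, if `agent-a` holds a grant covering only `{"read", "send_email"}` and delegates to `agent-b` while requesting `{"send_email", "delete_files"}`, the sub-receipt's scope is intersected down to `{"send_email"}`, so a later `authorized(sub, "delete_files", ...)` check fails rather than silently amplifying authority.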
The project will stay narrow. It does not claim to solve AI alignment or replace existing identity and policy systems. It focuses on one specific safety class: authorization under delegation, revocation, and evidence obligations.
I am requesting $25,000 for 8–10 weeks.
Budget:
- Founder runway: $18,000
- Compute, hosting, tools, and verification infrastructure: $3,000
- Public artifact preparation, diagrams, documentation, and review support: $2,000
- Contingency / legal-accounting / administrative costs: $2,000
This funding would let me focus full-time on compressing existing work into public, reviewable outputs.
If only the $15,000 minimum is reached, I will narrow the scope to the Delegated Authority Failure Lab, the public invariant specification, and 2–3 conformance examples. If the full $25,000 is reached, I will also produce evidence receipt examples, a fuller threat model, diagrams, and a more polished public technical note or repository.
I am Edmund Benson, an independent founder building Nexus Spine.
My background includes mortgage and financial-compliance experience, which gave me direct exposure to high-liability consent, authorization, audit, and evidence problems. Over the last several months, I have been building private formal-methods and verification artifacts for Nexus Spine, including work across Lean, Isabelle, TLA+, Apalache-style model checking, invariant tests, conformance testing, and proof-oriented architecture.
The public positioning here is intentionally narrower than the underlying research. I am not asking reviewers to evaluate a giant governance platform. I am asking for short runway to package one concrete safety direction: verifiable authorization, revocation, and evidence receipts for delegated AI-agent workflows.
What are the most likely causes and outcomes if this project fails?
The main failure modes are:
1. The work remains too abstract for useful external review.
2. The failure cases are not compelling enough to show why existing IAM, policy, or logging systems are insufficient.
3. The public artifacts do not make the formal-methods substrate legible.
4. The project is too early for immediate adoption.
I am mitigating these risks by focusing on concrete synthetic workflows, small public specs, conformance examples, and evidence receipt prototypes rather than broad theory.
If the project fails, the likely outcome is still useful: a public negative result or partial artifact clarifying where formal authorization safety for AI agents is too immature, too hard to communicate, or not yet externally valued.
I have not raised external funding for Nexus Spine in the last 12 months.
I have recently submitted several grant applications; some are pending and some were rejected, and none has yet resulted in funding. This Manifund proposal is intended as short bridge funding to convert existing private technical work into public-safe artifacts that others can evaluate.