MAGUS is a runtime governance architecture that prevents structural alignment drift in long-running AI agents: the class of failures that emerges after deployment, not during training. It addresses three specific failure modes absent from almost every current governance approach: instruction drift, autonomy accumulation, and authority laundering. The v3.0 specification has been published as a seven-document sealed series on Zenodo (DOI: https://doi.org/10.5281/zenodo.19013833), with formal invariants, a full test-harness specification, and an honest open-problems register. This funding builds the first working reference implementation.
The immediate goal is a working reference implementation of the MAGUS governance layer, specifically the DEL (Dynamic Epistemic Ledger), the Guardian execution layer, and the Reconciliation Thread, running on a local LLM inference pathway. We will achieve this by acquiring the hardware required for local inference (the architecture cannot be meaningfully tested on stateless cloud compute), building the first executable pathway from our sealed specification, and running the GSTH (Governance and Stability Test Harness) suite we have already specified. Results will be published openly. The longer-term goal is to establish MAGUS as the foundational open-source runtime governance layer for deployed agentic systems, with a second pathway (for API-hosted models, including GPT-4o, Claude, and Gemini) now in final documentation.
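To make the governance-layer idea concrete, here is a minimal, purely illustrative sketch of a hash-chained, append-only ledger of agent actions and their claimed authority: the kind of tamper-evident record a Dynamic Epistemic Ledger could build on. This is not drawn from the MAGUS specification; every class, method, and field name here is an illustrative assumption.

```python
import hashlib
import json


class EpistemicLedger:
    """Append-only, hash-chained record of agent actions.

    Illustrative sketch only: the real DEL is defined in the MAGUS v3.0
    specification; all names and fields here are assumptions.
    """

    def __init__(self):
        self._entries = []

    def append(self, actor, action, authority):
        """Record an action together with the authority claimed for it."""
        prev_hash = self._entries[-1]["hash"] if self._entries else "genesis"
        body = {"actor": actor, "action": action,
                "authority": authority, "prev": prev_hash}
        # Each entry commits to its predecessor via its own hash.
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        body["hash"] = digest
        self._entries.append(body)
        return digest

    def verify_chain(self):
        """Invariant check: every entry must hash correctly and point
        at its predecessor. Any edit to a past entry breaks the chain."""
        prev = "genesis"
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

A governance harness in the spirit of the GSTH could then assert, as a testable invariant, that the chain verifies after every agent step and that any retroactive rewrite of a recorded authority claim is detected.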
~$5,500–6,500: Hardware — used RTX 4090 or A6000 + supporting components for local LLM inference
~$1,000–1,500: Development infrastructure, tooling, and any cloud compute needed for comparative testing
~$500: Documentation, open-source repository setup, and initial publication of results
No salaries. We are self-funded independent researchers, and this covers the hard costs currently blocking implementation.
Calvin Cook — lead architect, formal specification design, adversarial stress-testing of invariants across both deployment pathways.
Titiya Ruangkwam — co-architect, systems formalisation, AI governance domain specialist. Active in senior AI governance and architecture communities, with established professional connections across AI governance, GRC, and enterprise AI leadership.
We are independent researchers operating as VaHive Systems Lab (Thailand). MAGUS v3.0 is our first major published work. We have no prior funded projects, which we state plainly. What we do have is a 14-document sealed specification across two deployment pathways, a formal 46-entry elevation cycle tracker with every entry resolved, and a v3.5 development cycle actively closing remaining architectural gaps. We believe the work itself speaks to our track record better than credentials would.
The most likely failure mode is a hardware cost overrun or acquisition delay that stalls implementation. If that happens, the specification remains fully public and usable by any team with sufficient resources; the Zenodo publication ensures the architectural work is not lost regardless of our implementation progress.
A secondary risk is that the reference implementation reveals architectural gaps requiring significant revision. We consider this a success condition, not a failure — our open-problems register exists precisely because we want identified gaps to be testable and falsifiable. Any such findings would be published openly.
We do not consider the project failed unless the specification itself proves internally incoherent — and the formalisation cycle we have completed makes that outcome unlikely.
$0. This is our first funding application. The work to date has been funded entirely with our own time.
There are no bids on this project.