You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.
The core problem
Current AI systems are fluent but unreliable. Large language models can blur facts, hypotheses, contradictions, and unsupported claims into one confident-sounding output. That blending, which this project calls proposal leakage, makes it hard to build AI systems that can be inspected, trusted, or corrected when the stakes are high.
Golem Physics takes a different path.
It represents knowledge as a geometric lattice: every claim becomes a coordinate with explicit position, provenance, neighbors, support paths, tension state, and verification state. Claims can be verified, proposed, rejected, preserved in tension, researched further, or left silent.
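The claim-as-coordinate idea can be sketched as a small data structure. Everything below is an illustrative assumption, not Golem's actual schema: the field names, the state labels, and the coordinate shape are invented for clarity.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical claim states, taken from the states described above;
# not Golem's actual enumeration.
class ClaimState(Enum):
    VERIFIED = "verified"
    PROPOSED = "proposed"
    REJECTED = "rejected"
    TENSION = "tension"        # contradiction preserved, not hidden
    RESEARCH = "research"      # flagged for further investigation
    SILENT = "silent"          # withheld from speech

@dataclass
class ClaimNode:
    """One claim as a lattice coordinate (hypothetical schema)."""
    claim: str
    position: tuple                      # explicit lattice position
    provenance: list                     # sources the claim traces back to
    neighbors: list = field(default_factory=list)       # adjacent node ids
    support_paths: list = field(default_factory=list)   # chains of supporting ids
    state: ClaimState = ClaimState.PROPOSED

node = ClaimNode(
    claim="Water boils at 100 C at sea level",
    position=(3, 7),
    provenance=["textbook:thermo-ch2"],
)
print(node.state.value)  # → proposed
```

The point of the sketch is that verification status is a property of the node's lattice state, not of how fluently a model phrases the claim.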
Dream cycles continuously work on the lattice. They crystallize supported claims, preserve real contradictions as visible tensions, identify gaps and anomalies, recycle unresolved material, and route unsupported proposals away from speech.
The LLM is not Golem's source of truth. LLMs help with extraction, proposal, synthesis, and voice. The source of truth is lattice state: anchors, provenance, neighbors, tensions, support paths, time, and verification state decide what can be spoken.
This is verification-before-voice: disciplined, inspectable knowledge before fluent output.
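A minimal sketch of the verification-before-voice gate, assuming a simple per-claim state label (the real system decides speech from full lattice state: anchors, provenance, support paths, and time, none of which are modeled here):

```python
# Hypothetical gate: only claims whose state permits speech reach the
# output; contradictions surface as visible tensions; everything else
# stays silent. State names are assumptions for illustration.
SPEAKABLE = {"verified"}
PRESERVED = {"tension"}

def voice(claims):
    """Partition (text, state) pairs into (spoken, tensions, withheld)."""
    spoken, tensions, withheld = [], [], []
    for text, state in claims:
        if state in SPEAKABLE:
            spoken.append(text)
        elif state in PRESERVED:
            tensions.append(text)       # shown as tension, never as fact
        else:
            withheld.append(text)       # unsupported proposals stay silent
    return spoken, tensions, withheld

spoken, tensions, withheld = voice([
    ("A", "verified"), ("B", "proposed"), ("C", "tension"),
])
print(spoken, tensions, withheld)  # → ['A'] ['C'] ['B']
```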
Why this matters for agent safety
Golem governs what an AI system can say. Constraint Native is the bridge to what an AI agent can do.
Constraint Native is a local Agent Firewall / MCP Gateway that applies the same discipline at the action boundary: tool calls, files, shell commands, network paths, and MCP servers can be inspected, blocked, replayed, and packaged into signed proof paths. The immediate grant still centers on Golem Physics, but Constraint Native shows how verification-before-voice can extend into governed action before execution.
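The same discipline at the action boundary might look like this sketch: a policy check before every tool call, with an inspectable log of decisions. The policy shape, rule names, and log format are assumptions for illustration, not Constraint Native's API.

```python
# Hypothetical action firewall: each tool call is checked against a
# policy before execution; every decision is recorded for later
# inspection and replay. Deny-by-default for unknown tools.
POLICY = {
    "file.read":  {"allowed": True},
    "shell.exec": {"allowed": False},
}

audit_log = []  # inspectable record of every gate decision

def gate(tool, args):
    """Return True if the call may execute; log the decision either way."""
    rule = POLICY.get(tool, {"allowed": False})
    decision = "allow" if rule["allowed"] else "block"
    audit_log.append({"tool": tool, "args": args, "decision": decision})
    return decision == "allow"

print(gate("file.read", {"path": "notes.md"}))  # → True
print(gate("shell.exec", {"cmd": "rm -rf /"}))  # → False
```

Deny-by-default mirrors the speech side: an action, like a claim, must earn its way past the gate rather than being permitted until proven harmful.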
Current progress: April 2026
A functional prototype already exists and runs today. The latest public snapshot shows 12,725 verified lattice nodes across 43 domains, 215 immutable verified nodes, 5 nodes held in preserved tension, and multiple inspectable surfaces including Claim Studio, Evidence Cockpit, Lattice Graph, Silence Map, Dream Theatre, and runtime traces.
I have spent roughly three years developing the underlying theory. In December 2025, I finally cracked a working model. Since then I have been building almost nonstop. Golem Physics is the first working domain Golem; Constraint Dynamics is the research organization behind it.
What this grant enables
$20,000 funds roughly one year of focused founder runway in Thailand, where costs are low and the grant has unusual leverage. It also covers compute, AI tooling, hosting, documentation, and the evidence work needed to make Golem easier for reviewers to inspect.
With this support, I will focus on:
- scaling the lattice and dream cycles beyond the current local ceiling;
- refreshing public metrics, runtime traces, screenshots, and visualizations;
- producing a short reviewer walkthrough and clearer demos;
- producing a clear Golem-to-Constraint Native walkthrough showing how claim-state discipline before speech becomes governed action before execution;
- improving the benchmark plan for proposal leakage, abstention, provenance retention, contradiction preservation, and false crystallization;
- preparing a stronger funder-ready packet for follow-on grants and external critique.
$5,000 minimum funding keeps the project moving for the next 3-4 months and funds the first evidence refresh. The $20,000 goal gives the project a real shot at a full year of concentrated work.
Why now
The hardest step, moving from theory to a working geometric system, is already complete. The current bottleneck is capacity and legibility: more lattice work, clearer evidence, better demos, and stronger reviewer materials.
The lean build matters, but it is not the headline. So far I have built this solo on a single 8GB Apple Silicon laptop with a small AI subscription. That constraint helped keep the system local, inspectable, and disciplined. The next step is to make the working system easier for other people to evaluate.
Non-claims
This is not an AGI project, a consciousness claim, a completed benchmark suite, a completed external audit, production-ready agent infrastructure, or a perfect-containment claim.
It is an early-stage research effort exploring whether geometric, constraint-native architectures can improve epistemic discipline in AI systems.
Team
Matthew A. Cator, founder of Constraint Dynamics. I built the current system, public website, evidence materials, runtime traces, and theory surface myself. No external funding has been received for this work in the past 12 months.
Links
Main website: https://www.constraintdynamics.org/
Golem interface: https://www.constraintdynamics.org/golem
Evidence page: https://www.constraintdynamics.org/evidence
Constraint Native bridge: https://www.constraintdynamics.org/product
Runtime trace: https://www.constraintdynamics.org/assets/docs/golem-runtime-trace-2026-04-29.md