For two years before this framework existed, I supported myself entirely through markets — treating prices as system state and reachable states as the hedge surface. I was reading market dynamics in ways I couldn't yet articulate. Surviving in an adversarial environment that punishes self-deception immediately turned out to be the right pressure test for a structural intuition that has since formalized into the Universal Intelligence Interface (UII): the substrate conditions any system must satisfy to maintain coherent self-reference under perturbation.
I am an independent researcher with no formal affiliation. The mathematical formulation of the framework is published on Zenodo (DOI 10.5281/zenodo.18017374). The Python implementation, Mentat-Triad, is on GitHub. A bridge essay positioning the work relative to attention-based architectures and Friston's active inference is available at https://zenodo.org/records/20135437.
A coherent self-maintaining system runs three components in triadic closure: a model of itself (f_self), a model of its environment (f_env), and a relational function mapping between them (f_rel). The closure is maintained as a loop — sensing, integration, prediction, anchoring, self-modification — that produces a measurable trajectory through state space. Three invariants must hold for the system to remain in the regime where intelligence is happening.
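As a minimal sketch of this loop in code, assuming dict-based models and caller-supplied prediction and modification operators (only f_self, f_env, and f_rel are named by the framework; every other name and structure here is an illustrative assumption, not the actual Mentat-Triad API):

```python
# Hypothetical sketch of one pass through the triadic loop. Only f_self,
# f_env, and f_rel come from the framework; the dict-based state, the
# step signature, and the callables are illustrative assumptions.
class TriadLoop:
    def __init__(self):
        self.f_self = {}      # model of the system itself
        self.f_env = {}       # model of the environment
        self.trajectory = []  # measurable trajectory through state space

    def f_rel(self, self_model, env_model):
        """Relational function mapping between the two models."""
        return {k: v for k, v in env_model.items() if self_model.get(k) != v}

    def step(self, observation, predict, modify):
        self.f_env.update(observation)                         # sensing
        mismatch = self.f_rel(self.f_self, self.f_env)         # integration
        prediction = predict(self.f_self, self.f_env)          # prediction
        self.trajectory.append((dict(self.f_self), mismatch))  # anchoring
        self.f_self = modify(self.f_self, mismatch)            # self-modification
        return prediction
```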
The framework is substrate-agnostic. It applies to AI systems, markets, organizations, and people. That I read it back out of my market experience, with the AI prediction falling out of that recognition rather than the other way around, is to me the strongest evidence that the structural claims are real.
Current Transformer architectures implement attention as a mechanism but do not run as substrates that maintain themselves coherently across time. Weights are frozen at inference. No self-modification operates during a session. There is no loop. Patches like RAG, tool use, and agent loops operate around the architecture rather than altering its substrate properties.
The prediction: scaling current architectures will continue producing capability gains on tasks that fit the one-shot attention pattern, but will not produce gains on tasks requiring the substrate to actually run — maintaining stable self-models across perturbation, updating environment-models from real interaction rather than retrieval, modifying operators where predictions fail. Crossing that ceiling requires architectural change, not more parameters.
Extend the Mentat-Triad substrate with a diverse perturbation array (a sketch of a common source interface follows below):
- Two open-weight 30B models running locally as adversarial perturbation sources
- One frontier LLM API for qualitatively different perturbation
- Simple text input for irreducible human perturbation
- Browser access for time-varying, real-world content
Plus a human interface layer allowing external observers to send timestamped perturbations and view trajectory traces in real time. The observability layer remains structurally separated from the substrate — the substrate does not see the math; the math projects onto the substrate.
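As a sketch of how these heterogeneous sources could present one interface to the substrate, with the observer kept on a one-way, read-only tap (every name here is a hypothetical illustration, not the actual Mentat-Triad wiring):

```python
# Hypothetical sketch: a uniform perturbation-source protocol plus a
# read-only observer log. Names and signatures are illustrative only.
import time
from typing import Protocol

class PerturbationSource(Protocol):
    def next_perturbation(self) -> str: ...

class LocalModelSource:
    """Wraps a locally hosted open-weight model as an adversarial source."""
    def __init__(self, generate):
        self.generate = generate  # callable: prompt -> text

    def next_perturbation(self) -> str:
        return self.generate("produce an adversarial probe")

class HumanTextSource:
    """Timestamped free-text perturbations from external observers."""
    def __init__(self):
        self.queue = []

    def submit(self, text: str):
        self.queue.append((time.time(), text))

    def next_perturbation(self) -> str:
        return self.queue.pop(0)[1] if self.queue else ""

def run_round(substrate_step, sources, observer_log):
    """One perturbation round. The substrate never reads observer_log;
    the log only records what the substrate did."""
    for source in sources:
        p = source.next_perturbation()
        response = substrate_step(p)                     # substrate side
        observer_log.append((time.time(), p, response))  # observer side
```

The one-directional append is the point: the observability layer receives a projection of the substrate's trajectory, and nothing flows back.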
Success criterion: under bounded perturbation across this diverse input array, the substrate metabolizes perturbations into bounded responses that decay toward baseline while preserving f_self continuity across sustained interaction.
Failure criterion: the trajectory shows runaway, lockup, or residual that fails to decay, or closure breaks under sustained perturbation in ways the mathematics explicitly forbids.
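As a minimal numeric sketch of how these criteria could be checked on a recorded trajectory, assuming a scalar residual per step (the thresholds, window size, and scalar-residual reduction are all illustrative choices, not values from the framework):

```python
# Hypothetical pass/fail check over a scalar-residual trajectory.
# Thresholds and window sizes are illustrative assumptions.
def classify_trajectory(residuals, runaway_thresh=10.0,
                        decay_window=50, baseline_eps=0.05):
    if not residuals:
        return "no data"
    if any(r > runaway_thresh for r in residuals):
        return "failure: runaway"
    tail = residuals[-decay_window:]
    if len(set(round(r, 6) for r in tail)) == 1 and tail[0] > baseline_eps:
        return "failure: lockup at nonzero residual"
    if sum(tail) / len(tail) > baseline_eps:
        return "failure: residual does not decay"
    return "success: bounded response decays toward baseline"
```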
Either outcome is a publishable result. If the prediction succeeds, the path opens to richer sensor arrays and embodied perturbation streams. If it fails, the framework's core claim is wounded in a specific, locatable way.
The substrate runs self-modification in a closed loop. Even at small scale, an experimental system that updates its own operators is the kind of thing that should be developed off cloud infrastructure with bounded blast radius. The isolated compute is development discipline matched to the architecture being tested. Safety practices for self-modifying systems are better worked out at small scales before larger labs reach the same architectural territory.
Budget:
- $12k: isolated compute rack with redundant VRAM, kept off cloud (sized to run 2× 30B models locally with uptime margin)
- $55k: 12 months of focused development time (scaling back active trading; this grant covers what trading currently supports)
- $10k: frontier API costs across 12 months of perturbation experiments
- $13k: buffer for unforeseen costs and deliverable polish
Deliverables:
- Extended substrate with the perturbation array described
- Empirical study of trajectory predictions across input regimes
- Public dataset of perturbation/response trajectory streams, runnable against the existing observer
- Improved observability tooling, open source
- Either a paper supporting the architectural prediction or one wounding it
Two years self-supporting through markets before the framework existed. Built theory and implementation without institutional affiliation, which is part of why the work crosses domains the way it does. Currently bottlenecked by time and compute, not ideas. $90k buys focus and hardware for one year, and resolves a specific architectural claim about current AI in a direction the field will need to know either way.
The Transformer ceiling prediction could be wrong; the empirical work shows that directly. The substrate/observer separation could collapse if I let the two couple; keeping them separate is an explicit discipline. Recognition from the ML mainstream will be slow regardless of the result; the work is far enough from current framing that slow uptake is structural, not a sign of failure.
If the 12-month test resolves yes: richer sensor arrays — phone as personal-and-reality input, satellite feeds, energy/matter sensors of various kinds. None of that is in scope this year. Year one establishes whether the substrate holds under bounded, diverse perturbation. The longer vision is what becomes possible only if it does.
None. I have done all the work up to now self-funded; the next steps require assistance.