I am developing an equivalence-level mathematical theory of transformer-based models. The goal is to identify structure that remains stable under scaling and reparameterization, moving beyond brittle, instance-specific explanations tied to particular weight tensors. If successful, this work would provide a more robust foundation for interpretability, evaluation, and principled reasoning about advanced AI systems.
The core goal is to determine whether trained transformer models admit a mathematically well-posed equivalence-level description that is stable under scaling. I am approaching this through a staged research strategy: (1) broad but bounded mathematical orientation to identify structurally aligned tools, (2) targeted specialization once a viable abstraction level is identified, and (3) continuous integration of newly learned mathematical concepts into the problem formulation. Progress does not hinge on a single result; partial frameworks or obstruction results are valuable outcomes.
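To make the notion of reparameterization concrete, here is a minimal illustration (a sketch of a standard attention identity, not a result from the white paper): the attention logits of a single head are unchanged if the query projection W_Q is replaced by W_Q M and the key projection W_K by W_K M^{-T} for any invertible matrix M, so a description tied to the raw values of W_Q and W_K is not well-posed at the equivalence level. A NumPy sketch, with all sizes and names hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d_model, d_head = 4, 8, 8                 # toy sizes, purely illustrative

    X = rng.normal(size=(n, d_model))            # token representations
    W_Q = rng.normal(size=(d_model, d_head))     # query projection
    W_K = rng.normal(size=(d_model, d_head))     # key projection

    # Attention logits of one head: (X W_Q)(X W_K)^T / sqrt(d_head)
    logits = (X @ W_Q) @ (X @ W_K).T / np.sqrt(d_head)

    # Reparameterize with an invertible M: W_Q -> W_Q M, W_K -> W_K M^{-T}
    M = rng.normal(size=(d_head, d_head)) + 3.0 * np.eye(d_head)  # generically invertible
    logits_reparam = (X @ (W_Q @ M)) @ (X @ (W_K @ np.linalg.inv(M).T)).T / np.sqrt(d_head)

    # Distinct weight tensors, identical computation.
    assert np.allclose(logits, logits_reparam)

Distinct parameterizations thus realize the same function; the theory asks what structure survives when such equivalence classes, rather than individual weight tensors, are taken as the basic objects.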
I am requesting funding to support full-time independent research for approximately 12 months. Funds would primarily cover living expenses, with a small portion allocated to compute, books, and research materials. The project does not require significant infrastructure; sustained time and focus are the main inputs.
I am an independent researcher with a background in computer science and mathematics. I have previously produced technical research on structured reasoning and inference-time control in large language models, including formal analyses and preprints. Over the past ~6 weeks, I have been working full-time on this project and have produced a substantial white paper formulating the equivalence-level theory question. I am the sole contributor to this project.
The main risk is that no non-degenerate equivalence-level description exists that meaningfully improves on existing perspectives. In that case, the project would still produce rigorous negative or obstruction results that clarify why certain abstraction levels fail. These outcomes would narrow the theoretical search space and inform future work, rather than constituting a total failure.
I have not raised funding for this project in the past 12 months. I have recently submitted applications to Emergent Ventures and the Long-Term Future Fund; no decisions have been made yet.