Funding this research is an investment in high-agency talent from an underrepresented region. By supporting my participation in Algoverse 2026, you are not just backing a technical audit of LLM reasoning; you are enabling a researcher who has already demonstrated national-level excellence to break through financial barriers. Your support grants you a front-row seat to the development of "digital neuropsychology" frameworks and contributes to a more diverse, global, and robust AI Safety ecosystem.
This research initiative focuses on Mechanistic Interpretability, specifically aiming to audit the internal reasoning circuits of Large Language Models. As an independent researcher from Argentina accepted into the Algoverse 2026 cohort, I will use Sparse Autoencoders (SAEs) to move beyond behavioral testing and identify the specific logic pathways that govern model outputs.
The primary goal is to treat LLMs as digital neuropsychology subjects, isolating features to understand the "how" behind their reasoning. I will achieve this by utilizing Sparse Autoencoders to decompose polysemantic neurons into interpretable features and then applying activation patching to verify these circuits.
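To make the method concrete, here is a minimal sketch of the SAE decomposition step described above. The dimensions, the ReLU encoder, and the L1 sparsity penalty are illustrative assumptions for exposition, not the exact training recipe the project will use:

```python
import numpy as np

# Sketch: a sparse autoencoder (SAE) maps dense model activations into a
# wider dictionary of sparse, hopefully interpretable features, then
# reconstructs the original activations. All sizes here are hypothetical.

rng = np.random.default_rng(0)

d_model, d_dict = 64, 512          # residual-stream width, dictionary size
W_enc = rng.normal(0, 0.1, (d_model, d_dict))
W_dec = rng.normal(0, 0.1, (d_dict, d_model))
b_enc = np.zeros(d_dict)

def sae_forward(acts):
    """Encode activations into sparse features, then reconstruct them."""
    feats = np.maximum(acts @ W_enc + b_enc, 0.0)   # ReLU -> sparse features
    recon = feats @ W_dec                           # linear decoder
    return feats, recon

acts = rng.normal(size=(8, d_model))                # a batch of activations
feats, recon = sae_forward(acts)

# Training would minimize reconstruction error plus an L1 sparsity penalty,
# which pushes each activation to be explained by a few active features:
loss = np.mean((acts - recon) ** 2) + 1e-3 * np.abs(feats).mean()
```

Activation patching then works on top of this: swapping a feature's activation between a clean and a corrupted run and checking whether the output changes is what lets us verify that a candidate circuit actually carries the behavior.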
On a personal level, this project serves as a definitive pivot into professional AI Safety research. My objective is to bridge the gap between cognitive science and technical alignment, establishing a rigorous methodology that I can later apply to more complex frontier models. I aim to transition from an independent learner to a contributor who can provide high-level technical audits for the safety community. I also plan to begin a Cognitive Science major in the US in August 2027; this project bridges an important interdisciplinary gap and strengthens my path to university study there.
The total cost of the Algoverse AI Research Program is $3,325 USD. I have already secured a partial scholarship, and the $2,000 USD requested here will be used exclusively to cover the remaining tuition gap. This investment covers expert mentorship, access to a specialized research curriculum, and integration into the global AI Safety community. I am requesting this support because, as a researcher based in Argentina, the international tuition costs represent a prohibitive barrier despite my commitment. This grant is not just for a course; it is the necessary bridge to place a researcher from a high-potential, underrepresented region at the core of global safety efforts.
Minimum Funding ($1,000): This amount is the absolute threshold to make my participation viable. While reaching this milestone would allow me to officially join the cohort, it would still represent a significant financial strain for me and my family. At this level, I will be fully committed to the research, but the remaining balance would remain a major limitation, requiring extreme personal effort to cover. Supporting this minimum ensures the project starts, but it still leaves the mission in a precarious financial position.
Full Funding ($2,000): Reaching this goal completely removes the financial barrier and the burden on my family. It allows me to transition from "survival mode" to a 100% focus on technical execution. With the full funding, I can dedicate all my energy to auditing LLM circuits and contributing to the AI Safety community, ensuring that my research is driven by curiosity and rigor rather than financial stress.
I am the lead researcher on this project, bringing a background of scientific rigor and discipline. I define myself as a high-agency individual who has consistently sought excellence in competitive environments, notably as a two-time medalist in Argentina's National Chemistry Olympiad. This background in the hard sciences gave me the experimental precision I now apply to AI Safety. My recent acceptance into Algoverse AI provides the institutional framework and peer review necessary for this research.
The most likely cause of failure is the inherent technical noise in high-dimensional models, which can make Sparse Autoencoder features difficult to interpret. If the initial methodology does not yield clear circuits, the outcome will be a detailed technical report on the limitations of current SAE architectures for logical auditing. However, the mentorship at Algoverse provides a robust safety net to pivot the research if initial results are inconclusive.
In the last 12 months, I have raised $0; I never expected to have the opportunity to raise funds for my work. Today, however, my personal objectives and research ideas are stronger than any limitation, and I will do whatever it takes to fulfill them. I am currently bootstrapping my research with my own limited resources, which demonstrates my commitment to this path. Even in the face of financial uncertainty, I maintain a solid foundation of self-directed study and technical rigor that ensures this research will move forward regardless of obstacles.
Direct Impact: You accelerate a critical safety audit on how LLMs reason, potentially identifying deceptive circuits before they scale.
Talent Cultivation: You help a high-potential researcher from Argentina transition into the US academic system (Cognitive Science 2027), ensuring that future AI Safety leaders come from diverse economic backgrounds.
Knowledge Sharing: I am committed to publishing my findings and methodologies openly, providing the community with new tools for mechanistic interpretability.