You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.
Short version
We're building infrastructure to make advanced AI systems more governable in real operation. This project will create an early prototype focused on runtime oversight, pause and rollback controls, auditability, and trusted deployment for higher-risk AI systems.
Long version
We're building a patent-pending runtime governance and safety layer for advanced AI systems. The core idea is simple: current AI safety work often focuses on model outputs, policy documents, or post hoc monitoring, leaving a major gap around what happens when powerful systems are already live in real environments. This project will develop an early working prototype of a governance architecture designed to help keep advanced AI systems traceable, reviewable, controllable, and safer in operation. The initial prototype will focus on runtime oversight, policy-aware control, rollback and pause capabilities, auditability, and trusted deployment logic. The goal is not to build another chatbot or compliance dashboard, but a practical control layer for high-consequence AI use.
Over 12 months, the project will deliver a first integrated prototype of Lisa Intel’s core governance architecture. The first goal is to build a runtime governance layer that can observe, constrain, and document AI system behavior during operation. The second goal is to implement core control mechanisms, including pause, rollback to verified states, audit logging, and policy-aware intervention pathways. The third goal is to test this architecture in representative scenarios involving higher-risk AI workflows, so that the prototype demonstrates practical value rather than remaining at the level of concept. The work will be carried out through architecture design, prototype engineering, controlled testing, external technical review, and iterative refinement.
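To make the control mechanisms above concrete, here is a minimal Python sketch of the three primitives the prototype centers on: pause, rollback to a verified snapshot, and append-only audit logging. This is an illustrative sketch only, not Lisa Intel's actual architecture; the class and method names are hypothetical.

```python
import time
from copy import deepcopy


class RuntimeGovernor:
    """Illustrative governance wrapper around an AI system's mutable state.

    Provides three control primitives: pause (block further actions),
    checkpoint/rollback (restore a verified state), and an append-only
    audit log that records every governance event.
    """

    def __init__(self):
        self.paused = False
        self.audit_log = []   # append-only list of timestamped events
        self.snapshots = {}   # label -> deep copy of a verified state
        self.state = {}       # the governed system's mutable state

    def _audit(self, event, **details):
        # Every governance action is recorded with a timestamp.
        self.audit_log.append({"ts": time.time(), "event": event, **details})

    def checkpoint(self, label):
        """Store a deep copy of the current state as a verified snapshot."""
        self.snapshots[label] = deepcopy(self.state)
        self._audit("checkpoint", label=label)

    def pause(self):
        self.paused = True
        self._audit("pause")

    def resume(self):
        self.paused = False
        self._audit("resume")

    def rollback(self, label):
        """Restore the system to a previously verified snapshot."""
        self.state = deepcopy(self.snapshots[label])
        self._audit("rollback", label=label)

    def apply(self, action, update):
        """Apply a state update unless the system is paused.

        Blocked attempts are still audited, so the log shows what a
        paused system was asked to do.
        """
        if self.paused:
            self._audit("blocked", action=action)
            return False
        self.state.update(update)
        self._audit("apply", action=action)
        return True
```

In use, an operator would checkpoint a verified state, let the system run, and pause or roll back if behavior drifts; the audit log then documents the full intervention history. A real implementation would add policy-aware intervention rules and tamper-evident logging on top of these primitives.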
This funding would provide 12 months of working capital to build the first serious prototype. The largest share would support founder salary so the project can move forward full-time. The remainder would support software engineering help, cloud/compute, security testing, legal and patent support, and basic operating costs tied directly to development.
Budget (12 months, USD 200,000)
Founder salary: $96,000
Part-time engineering support: $42,000
Cloud, compute, APIs, and infrastructure: $18,000
Security testing and technical review: $12,000
Legal, patent, and compliance support: $18,000
Ops, software, admin, and contingency: $14,000
Founder: Pedro Bentancour Garin
I have an interdisciplinary background spanning engineering, political science, philosophy, and doctoral-level research in the humanities, with a long-term focus on power, governance, and control systems.
Through my presence on platforms like X and LinkedIn, I have built a strong network in the AI governance and safety communities. I have participated in UN and ITU seminars and been invited to meetings with staff from the EU AI Office and US Congressional commissions. Companies often contact me for my perspective on current developments in AI governance and safety, and on possible solutions.
Previously, I founded Treehoo, an early sustainability-focused internet platform with users in 170+ countries, and was a finalist at the Globe Forum in Stockholm (2009) alongside companies such as Tesla.
My academic work has been supported by 15+ competitive research grants, including funding from the Royal Swedish Academy of Sciences, and involved research stays at institutions such as Oxford University, the Getty Center (LA), the University of Melbourne, and the Vatican.
I am currently supported by an experienced strategy and fundraising advisor.
The main risk is scope. AI governance at runtime is a difficult technical problem, and a first prototype may not achieve all intended control or safety features within one year. There is also risk that integration and testing take longer than expected. Even in that case, the project would still produce valuable outputs: a working prototype of the core governance layer, tested control primitives such as pause and rollback, clearer technical validation of what works and what does not, and a stronger foundation for future funding or partnerships. A partial success would still move the field forward by turning abstract AI governance into something more operational and testable.
The project has been founder-led to date, developed without external funding.