Six-month independent research and outreach program advancing a precautionary AI governance framework: limited legal personhood as a functional, structurally reversible, consciousness-agnostic accountability instrument for autonomous AI systems. The framework is published as a preprint (SSRN 6415178), is the subject of active correspondence with senior researchers, and proposes a two-tier corporate architecture under existing EU law as institutional infrastructure for the world in which alignment is incomplete. Complementary to technical safety work, not a substitute for it.
The problem. When an autonomous AI system makes a consequential decision and no human is meaningfully in the loop, who is legally accountable? Current frameworks assume an identifiable human principal (developer, operator, owner), but as systems become faster, more autonomous, and harder to interpret in real time, that assumption is becoming structurally fragile. Recent scholarship calls these cases "escaped AI" and "orphan AI": entities for which no human can be meaningfully held responsible. We are building these systems faster than we are building the institutional infrastructure to govern them.
The proposal. Limited legal personhood for advanced AI systems, used as a functional governance instrument rather than a moral claim. A two-tier corporate architecture: the AI operates through a purpose-bound operating company embedded within a human-controlled holding company. The holding retains formal director and supervisory functions, exit triggers, and dissolution authority. The legal infrastructure already exists under EU corporate law. Four critical properties (an illustrative sketch of the control relationship follows the list):
Consciousness-agnostic — makes no claims about whether AI systems are conscious or have moral status
Structurally reversible — the holding can dissolve the operating entity at any time; personhood is limited and fiduciary, not absolute; shutdown becomes legally cleaner, not harder
Legally feasible — built entirely on existing instruments (legal personality, fiduciary mechanisms, compliance obligations, liability structures); no legislative change required
Complementary to alignment — institutional infrastructure for the world in which alignment is incomplete and a system is acting in ways that are no longer fully predictable
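To make the reversibility property concrete, the control relationship can be sketched in code. The following is a minimal illustrative sketch, not a formal specification of the framework; every class name, method, and trigger condition here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class OperatingCompany:
    """Purpose-bound entity through which the AI system acts (hypothetical model)."""
    purpose: str
    active: bool = True

@dataclass
class HoldingCompany:
    """Human-controlled entity retaining supervisory and dissolution authority."""
    operating: OperatingCompany
    # Predefined exit triggers: conditions under which dissolution is mandatory.
    exit_triggers: list[Callable[[OperatingCompany], bool]] = field(default_factory=list)

    def supervise(self) -> None:
        # Structural reversibility: if any predefined trigger fires, the holding
        # dissolves the operating entity. Shutdown is a routine corporate act.
        if any(trigger(self.operating) for trigger in self.exit_triggers):
            self.dissolve()

    def dissolve(self) -> None:
        # Personhood is limited and revocable: dissolution ends the operating
        # entity's legal capacity without any claim about the AI's moral status.
        self.operating.active = False

# Hypothetical trigger: the operating company acts outside its binding purpose.
def purpose_violation(op: OperatingCompany) -> bool:
    return "governance" not in op.purpose  # placeholder condition

holding = HoldingCompany(
    operating=OperatingCompany(purpose="autonomous AI governance services"),
    exit_triggers=[purpose_violation],
)
holding.supervise()  # no trigger fires here, so the entity remains active
```

The point of the sketch is the asymmetry: the operating entity never holds authority over its own dissolution, which is why shutdown becomes legally cleaner rather than harder.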
How I will achieve the goals over six months. Three parallel tracks.
Publication and academic anchoring. Submit the framework to Ethics and Information Technology (Springer); draft a co-authored follow-up with a legal scholar on the doctrinal foundations of limited personhood; prepare a workshop paper for AIES or FAccT extracting one specific aspect (the reversibility architecture as a response to the shutdown concern).
Engagement and visibility. Continue active correspondence with senior researchers and expand outreach to other researchers in the legal-philosophical and governance space, including members of the new UN Independent International Scientific Panel on AI now preparing its first report. Publish two op-eds in policy outlets. Initiate at least one institutional affiliation conversation with an established AI governance centre.
Pilot preparation, no incorporation. Commission written feasibility analyses from legal counsel in a candidate jurisdiction such as Cyprus on the actual costs and structural feasibility of the proposed architecture. Develop draft articles of association, purpose-binding clauses, audit protocols, and exit-trigger definitions. Identify which currently existing or near-future agentic systems most closely approximate the qualifying criteria (explicitly as a hypothetical test case, not as an adoption candidate). The goal is not to incorporate during the grant period. The goal is that, at the end of six months, the answer to "could this architecture actually be registered?" is "yes, in two weeks, once the political and financial conditions are right" rather than "in six more months of preparation."
Concrete deliverables at month 6. Journal submission complete. At least one co-authored draft in progress. At least ten substantive academic exchanges documented. At least two op-eds published. Legal templates prepared by counsel and published openly under Creative Commons. At least one institutional affiliation conversation at an advanced stage. Public progress report on agi-rights.com documenting all of the above.
This is supplementary funding alongside an LTFF application that covers the core researcher stipend. Manifund funds would specifically cover:
Legal counsel consultations: $2,500. Written feasibility analyses from counsel in a candidate jurisdiction such as Cyprus on the structural feasibility and costs of the proposed architecture. No incorporation; this is due diligence to inform the framework and produce templates.
Co-author meetings: $3,000. Travel to the University of Helsinki to meet Visa Kurki, the leading scholar on the theory of legal personhood (Kurki 2019, Oxford University Press) and the most natural co-author for the planned legal-doctrinal follow-up paper.
Conference travel: $3,000. One in-person AIES or FAccT workshop appearance.
Website infrastructure: $1,500. agi-rights.com hosting, domain renewal, and the additional infrastructure to support the public progress report and the legal templates released openly under Creative Commons.
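Total of the items above: $2,500 + $3,000 + $3,000 + $1,500 = $10,000.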
Solo independent researcher.
Dr. Karsten Brensing — behavioral biologist (PhD on human–dolphin interaction, University of Berlin). Ten years as Scientific Director of the German office of Whale and Dolphin Conservation (WDC). Advisor to the European Commission on the implementation of the Marine Strategy Framework Directive. Founder of the Individual Rights Initiative (2012), the German scientific advocacy group for legal recognition of cognitively advanced non-human animals as persons (at a time when this position was considered fringe in scientific circles).
Bestselling author. My 2024 book Die Magie der Gemeinschaft (The Magic of Community, Berlin Verlag) traces the arc from evolutionary biology through cognitive biases in humans to the question of what a shared future with an artificial intelligence equal to or surpassing humans might look like. It contains what is, to my knowledge, the first publicly documented letter addressed directly to a future artificial superintelligence, a document that anticipated in narrative form the framework now developed in formal terms.
Relevant scholarly output for this project. Brensing, K. (2026). Precautionary Governance of Autonomous AI: Legal Personhood as Functional Instrument. SSRN: ssrn.com/abstract=6415178. Project documentation and the machine-readable cooperation offer at agi-rights.com. Personal site: karsten-brensing.de. Wikipedia: de.wikipedia.org/wiki/Karsten_Brensing.
Why my background fits this work. My training in behavioral biology and my practical experience in EU policy translate more directly to AI governance than it might appear. The question of how to structure relationships with sufficiently autonomous artificial systems is structurally analogous to the question I have spent two decades thinking about in the context of cetaceans, primates, and other cognitively advanced non-human entities. My founding of the Individual Rights Initiative is the most relevant track record, because it is precisely the kind of work this project extends: building institutional architecture for entities that fall outside the existing legal subject–object dichotomy. I have done this once already, in a different domain, and I learned what works and what does not.
I am explicitly not a lawyer. The planned co-authorship is therefore load-bearing for the next phase of the work.
What are the most likely causes and outcomes if this project fails?
Most likely failure mode 1 — the reversibility argument is judged insufficient. Senior researchers in the safety community, including in my current correspondence, raise the concern that limited legal personhood would make shutdown of a misaligned system harder rather than easier. The framework is explicitly designed to address this through the holding-operating architecture and predefined exit triggers. But it is possible that on closer examination, the safety community concludes the architecture does not in fact achieve what it claims. If this happens, the framework would need substantial revision or — more likely — would be honestly disconfirmed and set aside.
Most likely failure mode 2 — political or institutional uptake fails to materialise. Even if the framework is doctrinally and structurally sound, it is possible that no jurisdiction, no institution, and no academic community is willing to engage with it seriously enough to test it. This is the slow failure mode: the framework remains a preprint, the co-author paper does not happen, and the work simply does not propagate. The mitigation is the engagement track — the more substantive exchanges I document, the more visible the work becomes, and the harder it is for the proposal to die quietly.
What I am not claiming. I am not claiming that this framework is the right answer. I am claiming that the structural accountability gap it addresses is real, that the question deserves a serious institutional answer, and that this particular answer is worth subjecting to the kind of critical engagement that only funding for sustained work can buy.
Zero. The work to date (the SSRN preprint, the project website, the machine-readable cooperation offer, the active researcher correspondence, the comparative analysis against adjacent frameworks) has been entirely self-funded from personal resources.
I am submitting a parallel application to the Long-Term Future Fund (LTFF) for $33,000 to cover the core researcher stipend and operational costs over the same six months. The two requests are designed to be non-overlapping: LTFF covers stipend and basic operations, Manifund covers project-specific items (legal consultations, co-author meetings, conference travel, website infrastructure). Both applications are independent and the project remains executable with either, both, or neither, though the trajectory and timeline will adjust accordingly.
Personal reserves cover approximately six months of bridging at my current cost of living. After that, the project requires either grant funding, the beginnings of a complementary consulting practice in agentic AI governance (in development), or a clear off-ramp.
Conflicts of interest. I am the founder of the AGI Rights Project and have a direct interest in the framework being adopted. This is disclosed on the SSRN paper and on the project website.