Frontier AI developers operate without meaningful legal accountability. No mandatory liability regime. No mandatory insurance. No enforceable fiduciary duties toward those affected by their systems. This is not a technical problem — it is a legal and institutional one. Every other industry deploying potentially catastrophic capabilities at scale (aviation, nuclear, pharmaceuticals, finance) was disciplined through the private law enforcement mechanisms that frontier AI currently lacks entirely.
This project develops those mechanisms by drawing on the one domain that is structurally isomorphic to frontier AI: commercial space law.
Commercial space is the only domain where private actors have deployed capabilities with global systemic risk potential without clear territorial sovereignty, without well-defined national jurisdiction, and with externalities affecting populations that have no representation in the governance negotiations. That is precisely the structure of the frontier AI problem. Frontier labs operate in a diffusely defined legal space, their externalities affect global populations with no voice in their decisions, and the absence of clear jurisdiction is exactly what allows them to operate without real liability exposure.
This structural isomorphism makes commercial space law not a superficial analogy but a precise comparative framework — one that contains six decades of negotiated solutions to exactly these problems, solutions that the AI governance community is not currently using.
This project extracts, adapts, and operationalizes those solutions for frontier AI over 12 months, producing open-access research, policy proposals, and a continuously updated intelligence resource for the AI safety and governance communities.
Core research question: What private law enforcement mechanisms — liability regimes, mandatory insurance, fiduciary duties, contractual audit rights, supply chain governance — have been developed in commercial space law to discipline private actors deploying capabilities with global systemic risk, and how can those mechanisms be adapted to frontier AI developers?
Why commercial space law specifically, and not aviation, nuclear, or pharmaceuticals?
Aviation, nuclear, and pharmaceutical analogies are valuable but structurally imperfect for AI governance. They involve actors operating under clear national jurisdiction, with physical assets subject to territorial enforcement, in industries where the harm pathway is relatively legible and localized.
Commercial space law is different in ways that map precisely onto the AI governance problem:
Private actors deploy capabilities affecting global commons without clear territorial jurisdiction
Externalities fall on populations with no representation in governance negotiations
The absence of sovereignty over the operational domain is precisely what creates the liability gap
International coordination mechanisms had to be designed from scratch to fill gaps that domestic law could not fill
The Outer Space Treaty of 1967, the Liability Convention of 1972, the Registration Convention of 1975, and the Cape Town Convention Space Protocol of 2012 represent six decades of iterative attempts to build accountability infrastructure for exactly this type of actor. That corpus is a research resource of extraordinary value for AI governance design, and it is currently almost entirely unused by the AI governance community.
Methodology:
Legal doctrinal analysis of commercial space liability frameworks combined with systematic comparison to current AI governance proposals. Semi-structured interviews with 30-40 experts across space law, AI safety, insurance, and policy communities. Continuous monitoring of regulatory developments in both domains published as monthly open-access briefings. Analogical institutional analysis using aviation, maritime, and telecommunications as secondary comparators.
Deliverables timeline:
Month 3:
Landscape Report (40 pages): "Private Law Enforcement for Frontier AI: Comparative Lessons from Commercial Space Liability" — the first systematic analysis using commercial space law as a comparative framework for AI accountability design
Launch of monthly intelligence brief: "Frontier Liability Intelligence" (public Substack, free)
Month 6:
Policy Brief Series (3 x 10 pages):
"Mandatory Insurance for Frontier AI Developers: Design Options Using Aviation and Space Models"
"Fiduciary Duties for AI Boards: What Nuclear and Space Operator Governance Teaches Us"
"Closing the Liability Gap: From the Outer Space Treaty to Frontier AI Accountability"
Public database launch: regulatory developments, liability proposals, governance mechanisms across AI and commercial space (open-access, updated monthly)
Month 9:
Expert Workshop (15-25 participants): "Building Accountability Infrastructure for Frontier Technologies" — convening AI safety researchers, space lawyers, insurance professionals, and policy practitioners around shared governance mechanisms for the first time
Month 12:
Final Report (60-80 pages): "Enforceable Accountability for Frontier AI: A Framework Derived from Commercial Space Law"
Academic paper submitted to peer review
Executive Summary (10 pages) for policymakers and EU/US legislative staff
Requested amount: $28,000 (first-phase tranche covering months 1-3, proof of concept)
Budget breakdown:
Researcher salary (3 months): $22,000
Expert interviews — travel and logistics: $2,000
Legal databases and data access: $1,500
Design and publication of Landscape Report: $1,500
Contingency: $1,000
Total: $28,000
Minimum funding ($28,000): Landscape Report completed and published, monthly intelligence brief launched with first 3 issues, expert interview series initiated with 15-20 interviews completed, public database v1 live.
Full funding goal ($28,000): Same — this is a tightly scoped first tranche with a single funding target.
This application covers months 1-3 only. Subsequent tranches covering months 4-12 (estimated $70,000-85,000) will be sought from LTFF, Open Philanthropy, and Survival and Flourishing Fund once the Landscape Report demonstrates research quality and the intelligence brief demonstrates real practitioner audience. I am applying to those funders in parallel and will disclose this grant in all subsequent applications.
Sustainability beyond grants: From month 6, the monthly intelligence brief will explore a premium tier for legal, investment, and policy practitioners as a sustainability mechanism. This is disclosed transparently to all funders and does not affect the open-access status of all primary research outputs.
I am a practicing lawyer specializing in EU AI Act compliance and digital regulation, currently COO of AI Advy, a fully outsourced AI risk management firm advising enterprise clients on EU AI Act implementation, and founder of AITrust Registry, a compliance platform for SMBs.
My work sits at the precise intersection this research requires. I advise organizations daily on the practical implementation of AI governance frameworks, which gives me direct knowledge of where current liability and accountability mechanisms fail in practice — not just in theory. That practitioner grounding is what makes this comparative analysis actionable rather than academic.
On the space law side, I have been researching commercial space governance, liability frameworks, and the intersection of space law with emerging technology regulation for the past two years, with particular focus on the Cape Town Convention Space Protocol, COPUOS working groups on space resources, and national space legislation developments across the EU, Luxembourg, and the US.
The combination of active EU AI Act practice with serious engagement in commercial space law is, to my knowledge, held by very few researchers globally. That combination is what makes the comparative analysis this project requires possible, and it cannot be replicated by hiring a space lawyer without AI governance experience or an AI governance researcher without space law background.
I work across English, Spanish, and French, enabling access to regulatory developments and expert communities across European institutions including ESA, ITU, and COPUOS, as well as US-based communities at FAA, FCC, and academic space law programs at Leiden, McGill, and Mississippi.
This is my first research grant application. My current work is commercially funded. I am applying because this research sits in a gap that neither academic institutions nor commercial actors have incentive to fill systematically — it requires the practitioner knowledge to identify what is missing and the legal depth to fill it rigorously.
Most likely failure mode — insufficient expert participation across both communities simultaneously.
Building credibility with both the AI safety community and the space law community in parallel is the hardest execution challenge. If the interview series fails to attract quality participants from one side, the comparative analysis loses depth.
Mitigation: I have existing professional relationships in the EU AI governance space and will prioritize space law network-building in months 1-2 through targeted outreach to Leiden Institute of Air and Space Law, McGill Institute of Air and Space Law, the International Institute of Space Law, and practitioners in the Space Generation Advisory Council. The Secure World Foundation Fellowship application I am submitting in parallel will strengthen these connections regardless of outcome.
Outcome if this occurs: Strong domain-specific analysis in both AI governance and space law, but shallower comparative synthesis than planned. Outputs remain useful but the primary differentiating contribution is weakened. The Landscape Report becomes a gap analysis rather than a full comparative framework.
Second failure mode — the intelligence brief fails to build practitioner audience.
If the monthly Substack does not reach practitioners who find it useful, the sustainability model and the evidence of real-world relevance for subsequent funders are both weakened.
Mitigation: Launch with warm outreach to existing professional contacts in EU AI compliance, space sector legal communities, and the EA governance network before seeking broader distribution. First issue published before this application closes.
How much money have you raised in the last 12 months, and from where?
This project has not previously received grant funding. My current ventures, AI Advy and AITrust Registry, are commercially funded through client revenue. This is my first application for research grant funding.
I am applying in parallel to LTFF, Open Philanthropy, and Survival and Flourishing Fund for the subsequent tranches covering months 4-12. No double funding of the same costs is sought across applications.