Active Grant
$3,000 raised
$6,000 funding goal

Project summary

This project aims to develop concrete proposals for "Live Governance" - new governance architectures that use AI to enable more responsive and contextually-aware public administration while maintaining coherence and consistency. Rather than forcing standardization, Live Governance leverages AI to allow local adaptation of rules and processes while preserving their underlying spirit and purpose.

What are this project's goals? How will you achieve them?
Primary goals:

  1. Develop detailed policy proposals and/or proofs-of-concept for Live Governance tools. This could include:

    • Live Regulation - AI systems that provide context-sensitive regulatory guidance

    • Live Administration - Adaptive processes for government services and licensing

    • Live Democracy - Tools for incorporating local perspectives into legislation

    • Live Accountability - Enhanced systems for public access to government information

  2. Identify key technical requirements and implementation challenges

  3. Create clear explanatory materials to make Live Governance concepts accessible to policymakers and other stakeholders

These will be achieved through:

  • One day per week of dedicated research over 6 months

  • Regular engagement with the High Actuation Spaces (HAS) community for feedback and refinement

  • Development of policy proposals and proofs-of-concept

  • Outreach to potential collaborators in government, legal tech, and policy

How will this funding be used?

The $6,000 grant would support one day per week of research work over 6 months, enabling focused development of Live Governance proposals and proofs-of-concept. A $3,000 grant would support an abbreviated research effort that would scope possible Live Governance tools without developing them into policy proposals or proofs-of-concept.

Team

The project is part of the High Actuation Spaces research agenda led by Sahil Kulshrestha. It has emerged from engagement with the broader High Actuation Spaces community, and we anticipate this collaboration will continue. More information about the conceptual framework underlying this proposal can be found in the Live Theory sequence on LessWrong.

What are the most likely causes and outcomes if this project fails?

Most likely causes of failure:

  1. Technical requirements exceed near-term AI capabilities

  2. Regulatory or privacy concerns limit implementation possibilities

  3. Inability to effectively communicate complex concepts to stakeholders

  4. Difficulty balancing local adaptability with systemic coherence

How much money have you raised in the last 12 months, and from where?

No funding received in the last 12 months. There is a pending grant application with the Future of Life Institute.

donated $2,980

Blake Borgeson

4 days ago

Sahil recommended this work, and I'm excited by what I read in the grant application. Happy to support it, and hopeful it can generate footholds towards a more adaptable, flourishing, robust society.

donated $20

Orpheus Lummis

about 1 month ago

Progress at the intersection of Live Machinery and governance processes / public administration could have significant potential upsides.
I've met Murray in person, and I've been impressed by their insight on law, governance, history, collaborativeness, care, and potentialities of leveraging AI tech in this space.