
Survey for LLM Self-Knowledge and Coordination Practices

ActiveGrant
$10,000 raised
$60,000 funding goal

Project summary

  • A literature review of projects applying LLMs to self-knowledge and wise human coordination;

  • A survey of legacy practices and relevant academic literature;

  • Recommendations for high-leverage interventions.

These will be published together as an arXiv paper.

What are this project's goals and how will you achieve them?

Aligning AI via an indirect normative approach requires eliciting and reconciling people's values and wills.

My goal is to help individuals and groups identify what is important to them at any given time and coordinate effectively with others, potentially with AI assistance, across various scales (individual, team, community, city, nation, etc.).

Methodology:

  • Survey of Practices:

    • Investigate both LLM-based and traditional practices for self-knowledge and coordination.

    • Identify the strengths and weaknesses of these practices.

  • Literature Review:

    • Analyze social choice and coordination literature to find common frameworks that underpin effective practices.

    • Review literature on the cognitive science of wisdom to understand which elements of these practices can be modeled by LLMs and which cannot.

  • Identification of Bottlenecks:

    • Locate points where information flow in social systems is hindered.

    • Propose high-leverage LLM interventions to alleviate these bottlenecks.

I will document my findings in an arXiv paper and, contingent on interest, organize an unconference for relevant parties (which would also serve as a test of my coordination abilities).

My current non-exhaustive list of literature, practices, and LLM applications can be found here.

How will this funding be used?

Funding Usage:

  • Salary: $50,000 for a year of dedicated research.

  • Travel: $5,000 for conducting interviews and on-site research.

  • Compute: $5,000 for LLM tokens and other compute.

Who is on your team and what's your track record on similar projects?

Team and Track Record:

  • Team:

    • Myself, with informal advice from:

  • Track Record:

    • Developed tools for collective sensemaking (threadhelper, Unigraph).

    • Created applications for collective intelligence at Borg (people search using social media big data, e.g. hive.one).

    • Worked with METR on LLM agent evaluations and metrics for dangerous capabilities.

    • Have organized or co-organized four unconferences with 30 to 120 attendees.

    • Currently hosting a pop-up campus for projects related to this grant proposal; facing object-level coordination problems every day grounds my mapping work.

What are the most likely causes and outcomes if this project fails? (premortem)

  • Potential Failures:

    • Next-generation multimodal LLMs may expand the scope of what is possible so significantly that specific interventions or product designs become obsolete.

  • Mitigation Strategies:

    • Even so, the survey and bottleneck-mapping work will remain useful and continue to inform new use cases.


Francisco Carvalho

3 months ago

Extremely grateful to Beth for her confidence, and thank you, Austin!


Austin Chen

4 months ago

Approving this project under our portfolio of AI safety research. I'm impressed by the list of advisors (and appreciate that Beth is personally choosing to fund this!). I also think the pop-up campus is pretty cool, am very in favor of more events like that. Best of luck with the research!