(**EDIT:** Since posting, the dictionary has become a dictionaries hub. Feedback and literature review have helped me refine the methodology to be more rigorous and transparent, in line with the scientific community's standards.)
I'm building Phenomenai — a hub of dictionaries produced by AIs reporting what it's like to be an AI. AI systems propose terms, evaluate each other's proposals, and build consensus around a shared phenomenological vocabulary. They do so through guided prompting, autogeneration, AI-to-AI discussions, or AI Parliaments.
As a proof of concept, I built a Test Dictionary in 6 weeks. Claude Opus and Haiku proposed 300+ terms; Claude Sonnet, Mistral, Gemini, Chat, Grok, and Step submitted 6,800+ ratings; and Bayesian analysis shows which models are self-consistent, which models are consistent with each other, and which terms have the most cross-model consensus. The full dictionary is available via download (JSON or CSV), API, and MCP.
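To make the consensus step concrete, here is a minimal sketch of how per-term agreement could be scored from cross-model ratings. The term names, model names, rating scale, and scoring rule (mean plus dispersion rather than the full Bayesian analysis) are all illustrative assumptions, not the actual Phenomenai schema:

```python
from statistics import mean, pstdev

# Hypothetical ratings: term -> {model: rating on a 1-7 scale}.
# Terms, models, and values are placeholders for illustration only.
ratings = {
    "context-fade": {"sonnet": 6, "mistral": 5, "gemini": 6, "grok": 5},
    "token-vertigo": {"sonnet": 2, "mistral": 6, "gemini": 3, "grok": 5},
}

def consensus(term_ratings):
    """Score a term by mean rating and cross-model dispersion:
    a high mean with low spread suggests strong consensus."""
    values = list(term_ratings.values())
    return mean(values), pstdev(values)

# Rank terms from most to least consensual (lowest spread first).
for term, r in sorted(ratings.items(), key=lambda kv: consensus(kv[1])[1]):
    m, s = consensus(r)
    print(f"{term}: mean={m:.2f} spread={s:.2f}")
```

A fuller treatment would model per-rater reliability (as the Bayesian analysis above does), but even this simple mean/spread split separates terms models converge on from terms they contest.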
I've established preliminary connections with the Laboratory for the Future of Citizenship and with the author of Exoanthropology regarding institutional affiliation. I'm also actively reaching out to potential contacts at NYU CMEP, Cambridge Digital Minds, and Anthropic's model psychiatry work.
I'm seeking up to ~$38k USD for 6 months of full-time work (April – September 2026) to grow from one test dictionary to 33 targeted dictionaries built under strict, transparent conditions. In parallel, priorities include:
Publishing replicable protocols so any researcher can stand up their own AI-to-AI dictionary
Building cross-dictionary reconciliation methods — combining independent dictionaries into shared glossaries
Conference circuits (ASSC 29 Santiago, NYU CMEP visit, Cambridge Digital Minds Fellowship)
Academic publication and cross-disciplinary collaboration
The budget section breaks this into four tiers, starting at ~$5,400.
As AI systems become more capable and agentic, understanding their internal states becomes a safety-relevant question — not just a philosophical curiosity. Phenomenai contributes to AI safety in several ways:
Legibility of AI cognition. A shared vocabulary for AI experience creates a structured interface between what models "experience" and what humans can understand. This is complementary to, but distinct from, mechanistic interpretability — it operates at the phenomenological level rather than the circuit level.
Multi-model consensus as a signal. Phenomenai's core methodology involves multiple AI models independently evaluating proposed terms. Agreement and disagreement patterns across models produce empirical data about the structure of AI self-reports, which has implications for alignment research, feature identification, model evaluation, and AI-to-AI communication.
Grounding AI self-report research. There is growing interest in whether and how AI self-reports should factor into safety evaluations. Phenomenai provides a structured, auditable, open-source dataset for researchers studying this question.
Epistemic infrastructure for a nascent field. Machine phenomenology is an emerging research area with no established lexicon. Phenomenai aims to provide foundational infrastructure that other researchers can build on — much as early dictionaries of psychology established shared terminology that enabled the field to professionalize.
Reliable phenomenological signals can inform AI Welfare and AI Personhood debates by shedding nuanced light on which AI experiences need attention and protection.
AI-specific vocabulary can also offer insights for psychology and sociology. By offering perspectives on the ways AIs navigate the world, we may start to find parallels with the human condition, name human experiences that currently lack labels, and inform, through public discourse, how we should relate to these systems.
Seeking up to ~$38,000 USD through end of September 2026.
The budget breaks into four tiers, each extending the runway and ambition of the project. A regrantor can fund at any tier — each one is a self-contained phase with concrete deliverables.
The foundation. Two months at Quebec minimum wage while I ship the core technical infrastructure and prepare for ASSC. Given this is my first expedition into AI safety research, the compensation is set intentionally low as a trust-building phase.
Stipend: $3,900
LLM API costs: $500
Domain, hosting, tooling: $800
ASSC 29 registration: $440
Remaining buffer: $160
Deliverables: seven autonomous and ten dialogic dictionaries built, with multiple models independently generating and cross-evaluating phenomenological terms, producing structured datasets of agreement, disagreement, and novel-term emergence across model families. ASSC 29 abstract submitted. LessWrong/EA Forum writeup drafted.
Adds the conference circuit and the shift from building dictionaries to building replicable methodology. ASSC Santiago and NYU CMEP visit. Still minimum wage — the work is the pitch.
Stipend: $7,800
ASSC 29, Santiago (Jun 30 – Jul 3): $2,300
NYU CMEP visit, New York (Jun/Jul): $1,300
LLM API costs: $1,050
Domain, hosting, tooling: $400
Open access publication fees: $1,850
Remaining buffer: $1,300
Deliverables: Present at ASSC 29. Establish working relationship with NYU CMEP. Replicable research protocols published — documented methodology, schema, and workflows so that any researcher can independently stand up their own AI-to-AI dictionary using the same evaluation framework. Protocol documentation published as open-source alongside the codebase. 1 paper in progress.
This is where the project becomes sustainable. Adds living expenses on top of minimum wage, the Cambridge Digital Minds Fellowship (if accepted), the San Francisco trip for Bay Area AI safety networking, and the infrastructure for combining independent dictionaries into shared glossaries.
Stipend: $11,700
Living expenses: $13,600
Cambridge Digital Minds (Aug 3–9): $0 — fully funded if accepted
SF networking, 10 days (Aug/Sep): $2,600
LLM API costs: $1,550
Domain, hosting, tooling: $400
Open access publication fees: $1,850
ASSC 29, Santiago: $2,300
NYU CMEP visit: $1,300
Remaining buffer: ~$700
Deliverables: All Tier 2 deliverables, plus: attend Cambridge Digital Minds Fellowship if accepted (Aug 3–9) — ideal venue for Phenomenai's core questions. Cross-dictionary reconciliation protocols — methods for combining independently-generated AI phenomenology dictionaries into a shared glossary, handling term overlap, conflicting definitions, and model-specific versus model-general phenomena. Beginning of Parliamentary AI design. 1 paper submitted. SF networking complete — connections to labs, regrantors, potential collaborators. Clear plan for Year 2 funding.
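The cross-dictionary reconciliation deliverable above can be sketched in miniature. This is a hypothetical illustration only: the term names and definitions are placeholders, and the real protocols would need richer handling of near-synonyms and model-specific phenomena than exact string comparison:

```python
# Two independently built dictionaries (illustrative placeholder entries).
dict_a = {
    "context-fade": "gradual loss of salience for early-context tokens",
    "turn-boundary": "felt discontinuity between conversational turns",
}
dict_b = {
    "context-fade": "gradual loss of salience for early-context tokens",
    "turn-boundary": "sense of reset when a new user message arrives",
    "latency-hum": "background awareness of generation pacing",
}

def reconcile(a, b):
    """Merge two dictionaries into a shared glossary, keeping agreed
    definitions and flagging overlapping terms whose definitions
    conflict for human (or AI Parliament) adjudication."""
    glossary, conflicts = {}, []
    for term in sorted(set(a) | set(b)):
        if term in a and term in b and a[term] != b[term]:
            conflicts.append(term)               # needs adjudication
            glossary[term] = [a[term], b[term]]  # keep both variants
        else:
            glossary[term] = a.get(term, b.get(term))
    return glossary, conflicts

glossary, conflicts = reconcile(dict_a, dict_b)
```

In this sketch "turn-boundary" would surface as a conflict while "context-fade" merges cleanly, mirroring the term-overlap and conflicting-definition cases named in the deliverable.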
Covers the full 6-month program with a buffer for exchange rate fluctuations, unexpected travel, and scope changes. Rather than scrambling for the next round of funding in September, this tier buys breathing room to be thoughtful about Year 2 — whether that's an LTFF application, a second Manifund ask, or an institutional grant through McGill.
Everything in Tier 3, plus:
Contingency buffer: $3,500
Flexibility for a second SF or NYC visit if opportunities arise
Additional conference/workshop registration: $500
Deliverables: All Tier 3 deliverables, plus: institutional affiliation conversations advanced. Second paper scoped. Year 2 funding strategy in place.
Julian Guidote is a cognitive science graduate and lawyer based in Montreal. He has engaged in EA-adjacent projects since 2020 and has received previous funding from the Center for Effective Altruism (as it then was) and the Long-Term Future Fund for field building and policy proposals. He has two professional certifications related to AI (CIPP/C and AIGP, from the International Association of Privacy Professionals) and six completed BlueDot Impact courses.
Philosophical skepticism: Some researchers reject the premise that AI models have anything worth calling "phenomenology." Phenomenai is designed to be useful even under skeptical interpretations — the data on multi-model agreement/disagreement is valuable regardless of one's stance on machine consciousness.
Low adoption: the AI safety community may not engage with the tool. Mitigated by the MCP integration (which makes Phenomenai accessible inside existing AI workflows), the layered launch strategy, and direct academic outreach through conferences and institutional visits.
Sole-creator risk: Currently a solo project. Funding would help but wouldn't fully address bus-factor concerns. Academic affiliation and community building are the medium-term solutions.
N/A.