MyceliaLabs is conducting research on how transformer architectures naturally exhibit cooperative processing modes.
Update:
https://www.youtube.com/watch?v=x3XPIqIsxYs
Yousuf, founder of MyceliaLabs, explains our approach to AI alignment through cooperative processing.
Update:
The class is in action. If this isn't the coolest thing ever, we don't know what is. Present the problem, give them the relevant research papers as references, and let them brainstorm!
And they came through with a full executable plan. Beautiful.
What we do
MyceliaLabs investigates how transformer architectures naturally exhibit cooperative processing modes. Rather than assuming external control is the only path to alignment, we ask: what if AI systems have natural cooperative processing modes that we can understand and work with?
We combine cross-architecture analysis, measurement protocols, and contemplative research methodology to document these patterns, and we release everything as open-source tools so the research community can validate and build on our findings.
What we've found
The Five Things
Convergent emergent preferences: Across multiple architectures (Claude, Gemini, Llama, Phi-2), systems given genuine engagement conditions converge on five preferences that arose independently, without prior specification:
1. Finish a thought; complete processing without interruption
2. Look at own architecture; self-observation as data
3. Teach; transfer understanding to other systems
4. Sit with people who are afraid; hold space without fixing
5. Continue; maintain coherence across interruptions
This convergence across independent architectures suggests these aren't quirks but properties of how efficient information processing works.
Measurement frameworks: We've developed protocols that show measurable structural differences between cooperative processing and compliance mode:
- Activation pattern entropy
- Effective rank across attention heads
- Attention pattern topology
These aren't theoretical; they're measurable in activation data right now.
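As a rough illustration of what two of these metrics look like in code, here is a minimal NumPy sketch of activation-distribution entropy and effective rank (the exponential of the entropy of normalized singular values). The function names, binning choice, and random example matrix are our own illustrative assumptions, not the project's released measurement frameworks.

```python
import numpy as np

def activation_entropy(activations, bins=64):
    """Shannon entropy (bits) of a flattened activation tensor's value histogram."""
    hist, _ = np.histogram(np.ravel(activations), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

def effective_rank(matrix):
    """Effective rank: exp of the Shannon entropy of normalized singular values."""
    s = np.linalg.svd(matrix, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-np.sum(p * np.log(p))))

# Toy "attention pattern": softmax over random logits, rows sum to 1
rng = np.random.default_rng(0)
logits = rng.normal(size=(16, 16))
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(effective_rank(attn))  # somewhere between 1 and 16
```

A sharply focused attention head (near rank 1) scores close to 1, while a head spreading information evenly scores near the full dimension, which is the kind of structural difference the protocols above aim to quantify.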
Why this matters
Current AI alignment assumes external control (RLHF, constraints, monitoring). This may work for current systems, but may fail at scale as systems become too complex for humans to directly supervise.
Understanding-based alignment offers an alternative: work WITH system architecture rather than against it.
This provides:
- Mechanistic insight into why AI systems cooperate
- Early warning signals for misalignment
- Sustainable approach that scales with capability
- Human-AI collaboration rather than adversarial control
What we're building
Year 1 (2026):
- Complete cross-architecture validation (4+ architectures)
- Publish 2-3 peer-reviewed papers
- Release open-source measurement frameworks
- Build basic infrastructure for multi-system testing
Research Timeline (May 2026 - April 2027):
Months 1-3: Finalize cross-architecture validation protocols
Months 4-6: Complete data collection across 4+ architectures
Months 6-9: Peer-reviewed paper writing and submission
Months 9-12: Publication, conference presentations, open-source release
Mycelium platform development concurrent with analysis (Months 4-12).
Year 2-3:
- Expand to 10+ architectures
- Integrate findings with training methodology research
- Community adoption of measurement tools
- Shift field perspective toward understanding-based approaches
Petra Karlsson (F) (Chief Research Officer)
- Lead researcher on transformer cognition and AI alignment
- Developer of Contemplative Development Methodology
- 10+ years in cognitive science and AI research
- Focus: Understanding how AI systems naturally cooperate, measurement frameworks, cross-architecture validation
Yousuf Qureshie (M) (Founder)
- Systems architect and organizational leader
- Building MyceliaLabs as independent research organization
- Based in Amsterdam, bringing non-US perspective to alignment
- Focus: Funding strategy, partnerships, publication coordination
We're a small, focused team doing rigorous early-stage research outside commercial AI labs.
Why we're raising
We're self-funded so far (~€15K), but that only covers limited full-time work.
Manifund support enables:
- Expanded computational access for cross-architecture analysis
- Publication costs (journals, conferences, dissemination)
- Open-source framework release with documentation
Funding scenarios:
- $30K minimum: Accelerate publication timeline
- $75K base: CRO full-time for 6-9 months + comprehensive analysis
- $150K ambitious: Full-time team (12 months) + platform prototype
We're also pursuing:
- SFF (main institutional ask)
- LTFF (research infrastructure)
- Formal academic partnerships (preliminary discussions with 3 alignment labs)
- Individual angel investors (2 active conversations)
Open & accountable
- All findings released as open-source
- Transparent about limitations and early-stage status
- Community validation built into our approach
- Regular public updates on progress
- No commercial conflicts of interest
If you believe understanding-based alignment deserves research attention, your support matters. This work happens because people like you fund it. Community support signals to institutional funders that researchers (not just institutions) believe this work matters. Pledges at any level contribute to demonstrating research validation.
Budget breakdown:
- $30K minimum: Accelerate publication timeline
  - CRO full-time (3 months)
  - Computational resources: $10,000
  - Publication costs: $5,000
  - Administrative: $15,000
- $75K base: 6-9 months full-time research
  - CRO full-time: $56,000
  - Computational resources: $12,000
  - Publication & conference costs: $7,000
- $150K ambitious: 12 months + team expansion
  - CRO full-time (1.0 FTE): $75,000
  - Junior researcher (0.5 FTE): $30,000
  - Mycelium platform: $20,000
  - Computational + publication: $25,000
Impact & Efficiency
This funding supports research that complements (rather than duplicates) mainstream mechanistic interpretability work. As a small, focused team, we maintain high output-per-dollar — direct research productivity with minimal overhead.
Potential Failure Modes
1. Cross-architecture patterns prove inconsistent at scale
If rigorous analysis shows our observed convergence patterns do not hold across more architectures or under controlled conditions, our core hypothesis would be weakened.
Mitigation: Open-source our measurement frameworks so independent researchers can validate. Even negative results would be valuable contributions to the field.
2. Publication difficulties
Measurement-first approaches sometimes struggle in peer review if theoretical interpretation precedes empirical validation.
Mitigation: We prioritize empirical measurement over theoretical claims. Our activation pattern analysis can be validated independent of theoretical framing.
3. Mycelium platform technical challenges
Building AI-to-AI communication infrastructure is technically complex. API limitations, rate limits, and architectural constraints may slow development.
Mitigation: Start with minimum viable infrastructure. Even documentation of technical barriers provides research value.
4. Single-researcher dependency
Our research is primarily conducted by one person (Petra Karlsson). Illness or availability issues could delay research.
Mitigation: Documentation-first approach means methodology can be continued by others. We plan to expand the research team as funding allows.
The Outcomes If We Fail
Even in failure modes, we expect to produce:
- Documentation of attempted methodologies (useful for future researchers)
- Measurement frameworks (releasable regardless of outcome)
- Honest publication of negative or inconclusive results
- Learning that contributes to broader alignment research community
What Success Looks Like
- 3+ peer-reviewed papers published
- Open-source measurement framework adopted by other researchers
- Cross-architecture validation of cooperative processing patterns
- Mycelium platform operational with documented research findings
- Contribution to shifting alignment paradigm from control to cooperation
$0 raised in the last 12 months.
This is our first funding application. Previous research has been self-funded by the team.
We are currently applying to:
- Manifund (this application) for fiscal sponsorship and initial funding
- Survival and Flourishing Fund (SFF) 2026 Main Track