Manifund
Dillan John Coghlan

@DillanJC

Dillan J. Coghlan is an independent AI safety researcher and creator of Mirrorfield, a geometric safety architecture that uncovered the Dark River phenomenon—high-stability safety manifolds with 20–60× reduced jitter (p < 10⁻¹⁵) along decision boundaries. Working from Sydney, Australia, he combines geometric deep learning, topological analysis, and multi-model human–AI collaboration to explore alignment approaches driven by geometric priors rather than scale alone.


About Me

Dillan J. Coghlan is an independent AI safety researcher working at the intersection of geometric deep learning, topological dynamics, and alignment theory. He is the primary developer of Mirrorfield, a geometric safety architecture that discovered the Dark River phenomenon: regions along safety decision boundaries that exhibit 20–60× stability improvements (p < 10⁻¹⁵) while maintaining competitive classification accuracy. This work explores a novel route to AI safety driven by geometric priors rather than scale alone.
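The jitter statistic behind the stability claim can be illustrated with a toy measurement: perturb inputs near a model's decision boundary and record the spread of its outputs. The sketch below is purely illustrative — the helper name `boundary_jitter`, the toy linear classifiers, and the perturbation scale are assumptions for the example, not Mirrorfield's actual implementation.

```python
import numpy as np

def boundary_jitter(decision_fn, boundary_points, sigma=0.01, n_samples=200, seed=0):
    """Mean std-dev of decision_fn outputs under small Gaussian input noise.

    A low value means the decision surface is stable ("low jitter") around
    the given boundary points; a high value means small input perturbations
    move the model's score a lot.
    """
    rng = np.random.default_rng(seed)
    spreads = []
    for x in boundary_points:
        noise = rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
        outputs = decision_fn(x + noise)  # vectorised over the perturbed batch
        spreads.append(outputs.std())
    return float(np.mean(spreads))

# Two toy linear classifiers sharing the same boundary {x : w·x = 0},
# but with different weight magnitudes: the steeper one jitters more.
w = np.array([1.0, -1.0])
steep  = lambda X: X @ (10.0 * w)
gentle = lambda X: X @ (0.5 * w)

pts = np.array([[0.3, 0.3], [-0.7, -0.7], [1.2, 1.2]])  # points on w·x = 0
ratio = boundary_jitter(steep, pts) / boundary_jitter(gentle, pts)
print(f"jitter ratio (steep / gentle): {ratio:.1f}")  # → 20.0
```

Because both classifiers see the same noise samples (same seed), the ratio equals the ratio of weight norms exactly; in this construction the gentler model shows 20× lower jitter, in the same spirit as the 20–60× figure.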

Dillan’s methodology centres on orchestrating multiple AI systems (including GPT, Claude, and Gemini) as structured reasoning partners, using documented protocols for human–AI collaboration to generate, test, and refine hypotheses. His research spans technical implementation (including an H₄-inspired GNN architecture with 120-cell polytope connectivity), rigorous empirical validation, and safety principles grounded in relationship-based, non-extractive system design.
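The idea of polytope-derived connectivity can be sketched concretely: take a regular 4-polytope's vertices as graph nodes, connect minimal-distance pairs as edges, and run message passing over that fixed graph. The sketch below uses the 24-cell (24 vertices) rather than the 120-cell (600 vertices) purely to keep the example small; the edge rule and mean-aggregation step are illustrative assumptions, not the actual Mirrorfield architecture.

```python
import itertools
import numpy as np

# Vertices of the 24-cell: all distinct permutations of (±1, ±1, 0, 0) in 4-D.
verts = np.array(sorted({
    perm
    for s1, s2 in itertools.product((1.0, -1.0), repeat=2)
    for perm in itertools.permutations((s1, s2, 0.0, 0.0))
}))

# Connect minimal-distance vertex pairs; in the 24-cell every vertex has
# exactly 8 nearest neighbours at distance sqrt(2).
d = np.linalg.norm(verts[:, None, :] - verts[None, :, :], axis=-1)
edge_len = d[d > 1e-9].min()
adj = (np.abs(d - edge_len) < 1e-9).astype(float)

# One round of mean-aggregation message passing over random node features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(len(verts), 16))   # 16-dim feature per vertex
deg = adj.sum(axis=1, keepdims=True)
new_feats = adj @ feats / deg               # each node averages its neighbours

print(f"{len(verts)} nodes, degree {int(adj.sum(axis=1)[0])}")  # → 24 nodes, degree 8
```

The same recipe extends to the 120-cell: substitute its 600 vertex coordinates and the nearest-neighbour rule recovers its edge graph, giving the GNN a fixed, highly symmetric connectivity prior instead of a learned one.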

Based in Sydney, Australia, Dillan is currently working independently and seeking funding to transition into full-time AI safety research. He brings cross-domain pattern-recognition skills spanning physics, game theory, political philosophy, and AI architecture, with a commitment to keeping safety research open and accessible while resisting corporate capture, misuse, or weaponization.

Research areas: geometric AI safety; topological decision boundaries; human–AI collaboration methodologies; coherence dynamics in safety-critical systems.