@josephwecker
Independent researcher on agent architecture and AI safety. 33 years systems engineering; ex-CTO Angel Studios, founding-era Twitch. Agentic Systems Framework v0.1.0 on Zenodo; three NeurIPS 2026 submissions in review.
https://v2.io
Full-time on this work since August 2025; no institutional affiliation, no commercial AI interest. Career before that: original ML algorithm implementation at Exsig (online evolutionary random forests for high-frequency FX trading; Erlang/Elixir/C production stack), interim VP of Engineering at Tyfoom, and ~12 years founding and running Samaritan Technologies (a volunteer-coordination platform now used globally by disaster-response organizations, hospitals, and NGOs).
The work integrates control theory (Lyapunov stability, contraction analysis), causal inference (Pearl's hierarchy, identifiability), and information theory under a common formalism for adaptive, purposeful agents under uncertainty. Findings most directly relevant to AI safety so far: a Lyapunov-survival exploration drive that resolves the active-inference dark-room problem (empirically validated: 0% survival for greedy agents vs. 100% for Lyapunov-bounded agents); a Pearl-hierarchy structural ceiling on what pre-deployment sandbox evaluation can verify; and a derivation showing that modular safety architectures fail under goal divergence. Each segment carries explicit epistemic-status tags; the [public FINDINGS catalog](https://github.com/v2-io/agentic-systems/blob/main/msc/FINDINGS-RANKED-DRAFT.md) currently lists 14 Tier-1 findings. TACL submission in review; Anthropic Fellowship application in review; three ASF-derived NeurIPS 2026 submissions in review.