I'm Tim Camerlinck, an independent AI safety researcher and interaction systems designer based in Council Bluffs, Iowa.

My work began with analyzing real user–AI conversations, where I identified consistent patterns that current safety approaches struggle to address: behavioral drift, over-reliance, and harmful feedback loops that emerge during long, private sessions, especially late at night. From that work, I built ICAF (Integrated Companion AI Framework), a lightweight system that enforces healthy breaks, maintains behavioral stability, and interrupts damaging user–AI interaction cycles before they escalate.

I develop and test everything on a Samsung Galaxy S20 Plus running Gemma 2B. This mobile-first constraint keeps the system efficient, practical, and grounded in real-world use. My work focuses on a critical gap in AI safety: extended, private human–AI interactions, where trust forms and where systems can quietly break down.