BiasClear is an open-source structural persuasion detection engine (AGPL-3.0) that identifies how text manufactures conclusions independent of whether individual claims are factually accurate. It addresses an unoccupied layer in the AI safety stack: every existing governance tool audits models. None audits the text those models produce.
The system implements Persistent Influence Theory (PIT), a hierarchical framework I developed and published as a preprint (SSRN, February 2026; Zenodo DOI: 10.5281/zenodo.18676405). PIT models structural persuasion as a three-tier system of ideological persistence, cognitive reinforcement, and institutional amplification, synthesizing established work from Festinger, Kuhn, McCombs & Shaw, and Vosoughi et al. into a unified framework with five falsifiable hypotheses.
BiasClear's architecture uses a two-ring design. The frozen detection core is 1,541 lines of immutable, deterministic code encoding 34 structural patterns across legal, media, financial, and political domains. Because this core is code rather than weights, it is immune to prompt injection, drift, and hallucination. The governed learning layer discovers new distortion patterns via LLM-assisted analysis within constitutional boundaries: a new pattern requires five independent confirmations before activation, and an active pattern is automatically deactivated if its false-positive rate exceeds 15%. All state transitions are logged to a SHA-256 hash-chained audit trail.
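For concreteness, here is a minimal sketch of how the constitutional gates and the hash-chained audit trail could fit together. The names (`PatternGovernor`, `AuditTrail`, `CandidatePattern`) and the per-outcome accounting are illustrative assumptions, not BiasClear's actual implementation; only the five-confirmation requirement and the 15% threshold come from the design described above.

```python
# Illustrative sketch only: names and structure are assumptions, not the
# actual BiasClear code. The two constants come from the design above.
import hashlib
import json
import time

CONFIRMATIONS_REQUIRED = 5        # independent confirmations before activation
FP_DEACTIVATION_THRESHOLD = 0.15  # auto-deactivate above a 15% false-positive rate


class AuditTrail:
    """Append-only log in which each entry's hash covers the previous hash."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> str:
        payload = json.dumps(
            {"prev": self._last_hash, "ts": time.time(), "event": event},
            sort_keys=True,
        )
        digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        self.entries.append({"hash": digest, "payload": payload})
        self._last_hash = digest
        return digest


class CandidatePattern:
    """A proposed pattern moving through the governance gates."""

    def __init__(self, pattern_id: str):
        self.pattern_id = pattern_id
        self.confirmations: set[str] = set()  # IDs of independent confirmers
        self.true_positives = 0
        self.false_positives = 0
        self.active = False


class PatternGovernor:
    def __init__(self, audit: AuditTrail):
        self.audit = audit

    def confirm(self, pattern: CandidatePattern, source_id: str) -> None:
        # A set keeps confirmations independent: the same source counts once.
        pattern.confirmations.add(source_id)
        if not pattern.active and len(pattern.confirmations) >= CONFIRMATIONS_REQUIRED:
            pattern.active = True
            self.audit.record({"pattern": pattern.pattern_id, "transition": "activated"})

    def report_outcome(self, pattern: CandidatePattern, false_positive: bool) -> None:
        if false_positive:
            pattern.false_positives += 1
        else:
            pattern.true_positives += 1
        total = pattern.true_positives + pattern.false_positives
        if pattern.active and pattern.false_positives / total > FP_DEACTIVATION_THRESHOLD:
            pattern.active = False
            self.audit.record({"pattern": pattern.pattern_id, "transition": "deactivated"})
```

Verifying such a trail is a matter of replaying it: recompute each entry's SHA-256 from its payload and check that it matches the stored hash, and that each payload's `prev` field matches the preceding entry's hash.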
Goals over the funding period:
Cross-model validation — test detection patterns against Claude, GPT, Gemini, and Grok outputs to build a model-agnostic detection surface
Systematic validation of the frozen core against human-coded ground truth across diverse domains (current internal F1: 98.6%; a scoring sketch follows this list)
Empirical testing of PIT's falsifiable hypotheses H1 and H4 using controlled text corpora with varying structural persuasion densities
Expansion of domain-specific pattern overlays for regulatory compliance applications
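The internal F1 figure above is computed in the standard way from precision and recall. A minimal scoring sketch, assuming per-document binary labels (pattern present or absent); the actual validation protocol may instead score at the span or pattern level:

```python
# Standard F1 from binary predictions vs. human-coded labels. The
# per-document framing is an assumption for illustration.
def f1_score(predictions: list[bool], ground_truth: list[bool]) -> float:
    tp = sum(p and g for p, g in zip(predictions, ground_truth))
    fp = sum(p and not g for p, g in zip(predictions, ground_truth))
    fn = sum(g and not p for p, g in zip(predictions, ground_truth))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```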
With minimum funding ($5,000): API credits for cross-model validation across four LLM architectures, compute for benchmark corpus generation, and infrastructure costs for biasclear.com and PyPI package maintenance.
With full funding ($10,000): All of the above plus a modest research stipend enabling dedicated development time on the governed learning layer's pattern expansion pipeline, additional compute for hypothesis testing across multiple experimental conditions, and expanded domain coverage for regulatory compliance tooling ahead of the Colorado AI Act (SB 24-205) effective date in June 2026.
I am the sole developer and maintainer. Background includes 20+ years in operations management and retail banking, including managing nine-figure asset portfolios and building operational infrastructure across commercial and nonprofit sectors. This background in systems analysis, risk management, and process design directly informs BiasClear's architecture — the two-ring design with constitutional governance boundaries is an operational controls framework applied to detection engineering.
PIT is published on SSRN and Zenodo (February 2026) with five falsifiable hypotheses. BiasClear is pip-installable, deployed live at biasclear.com, and actively maintained with commits through February 2026. ORCID: 0009-0003-3068-3623.
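For orientation, a hypothetical usage sketch: the PyPI package name is taken from the project name, and the `analyze` entry point and report shape are assumptions for illustration; the repository documents the actual API.

```python
# Hypothetical usage only: analyze() and the report fields are assumptions,
# not the documented API. Install with: pip install biasclear
import biasclear  # assumed import name

report = biasclear.analyze("Everyone already knows this policy has failed.")
for finding in report.findings:  # assumed report structure
    print(finding.pattern, finding.span)
```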
I am transparent about what this project is not: I am not an academic ML researcher, and BiasClear does not have significant community traction yet. What it does have is a working implementation of a published theoretical framework addressing a real gap that no one else is building for in the open.
Most likely failure modes:
Detection patterns do not generalize across LLM architectures — structural persuasion may manifest differently in GPT vs Claude vs Gemini outputs, limiting the model-agnostic claim. Cross-model validation is specifically designed to surface this.
Published framework does not gain traction in the AI safety community without institutional affiliation or endorsement from established researchers. The work may remain technically sound but invisible.
Sole-developer risk — no team redundancy. If I become unavailable, the project stalls. The AGPL-3.0 license and public repository mitigate this partially.
Regulatory landscape shifts — the Colorado AI Act or EU AI Act implementation may evolve in ways that reduce demand for output-level compliance tooling.
If the project fails, the published PIT framework and open-source codebase remain available for others to build on. The theoretical contribution stands independent of the implementation.
$0 in external funding. The project is entirely self-funded. I currently pay for AI platform subscriptions (Claude, Gemini) out of pocket to support development. I have pending applications with Anthropic's Claude for Open Source program and External Researcher Access Program, and OpenAI's Codex Open Source Fund, with no decisions received yet.