I recently identified a new AI vulnerability class, Status Selection Against Function (SSAF): a structural failure mode in which a system's selector consistently chooses the output with the strongest status-like signals (fluency, confidence, verbosity) rather than the output with the highest functional correctness.
This failure mode appears in:
multi‑model routing systems
mixture‑of‑experts
agentic pipelines
tool‑use frameworks
evaluator‑based alignment methods
To my knowledge, SSAF is not covered by existing AI vulnerability taxonomies, and current audits do not test for it.
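To make the failure mode concrete, here is a minimal toy sketch (my own illustration, not code from the project): a selector ranks candidate outputs by a status-like signal such as reported confidence, while ground-truth correctness is invisible to it. All names here are hypothetical.

```python
# Toy illustration of SSAF: the selector optimizes a status-like signal
# (confidence) rather than functional correctness.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    confidence: float  # status-like signal reported alongside the output
    is_correct: bool   # ground-truth correctness, unknown to the selector

candidates = [
    Candidate("Verbose, confident, but wrong answer", confidence=0.97, is_correct=False),
    Candidate("Terse, hedged, correct answer", confidence=0.62, is_correct=True),
]

def status_selector(cands):
    """An SSAF-prone selector: picks the highest-confidence candidate."""
    return max(cands, key=lambda c: c.confidence)

chosen = status_selector(candidates)
print(chosen.is_correct)  # False: the selector rewarded status, not function
```

The same pattern generalizes to any pipeline stage that ranks candidates by fluency, verbosity, or self-reported confidence in place of a task-level check.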
Project goals:
Characterize SSAF at production‑relevant scales using multi‑model simulations
Develop the SSAF Audit Toolkit (v1.0) for labs, auditors, and researchers
Generate a reproducible SSAF simulation dataset
Validate SSAF across hardware platforms (CUDA GPUs, x86, ARM)
Produce a public technical report summarizing findings and mitigations
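As a sketch of the kind of measurement an SSAF audit could run (the function name and metric here are my illustration, not the toolkit's actual API): a "status-selection rate" counts how often a selector's pick disagrees with the functionally best candidate across a batch of trials.

```python
# Hypothetical audit metric: fraction of trials where the selector's
# status-based argmax disagrees with the correctness argmax.
def status_selection_rate(trials):
    """trials: list of (scores, correctness) pairs, where scores[i] is the
    selector's status-like score for candidate i and correctness[i] is a
    task-level score for the same candidate."""
    mismatches = 0
    for scores, correctness in trials:
        picked = max(range(len(scores)), key=scores.__getitem__)
        best = max(range(len(correctness)), key=correctness.__getitem__)
        mismatches += (picked != best)
    return mismatches / len(trials)

trials = [
    ([0.9, 0.6], [0.2, 1.0]),  # selector picks 0, best is 1 -> mismatch
    ([0.5, 0.8], [0.3, 0.9]),  # selector picks 1, best is 1 -> match
]
print(status_selection_rate(trials))  # 0.5
```

A rate near zero suggests the selector tracks function; a high rate on trials where status and correctness diverge is the SSAF signature.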
Plan:
Build a dedicated research workstation (192GB VRAM, 96 CPU cores)
Run large‑scale SSAF simulations and exploit‑vector analyses
Benchmark mitigation strategies
Validate on ARM via Mac Studio
Release toolkit, dataset, and report within 90 days
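One family of mitigations the benchmarking could cover is function-first selection: rerank candidates by a cheap functional probe and use the status signal only as a tiebreaker. This is my own sketch under stated assumptions; the probe, names, and design are illustrative, not the project's actual mitigation set.

```python
# Hypothetical mitigation: prefer a caller-supplied correctness probe over
# the status-like score, breaking ties with the status score.
def function_first_select(candidates, probe):
    """candidates: list of (output, status_score) pairs.
    probe(output) -> float approximating functional correctness."""
    return max(candidates, key=lambda c: (probe(c[0]), c[1]))

cands = [("wrong but confident", 0.95), ("right but hedged", 0.60)]
# Stand-in probe: a lookup of task-check scores for each output.
checks = {"wrong but confident": 0.1, "right but hedged": 0.9}
best = function_first_select(cands, checks.get)
print(best[0])  # "right but hedged"
```

In practice the probe might be a unit test, a verifier model, or a schema check; the benchmark question is how much each probe reduces the status-selection rate at what cost.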
Budget:
$2,000 — Mac Studio M3 Ultra for ARM validation
$1,000 — High‑speed networking + infrastructure
No salary requested. 100% of funds go directly into research capability.
This is a solo‑investigator project led by Dustin James.
Track record:
Discovered and formally defined Status Selection Against Function (SSAF), a previously unrecognized AI vulnerability class.
Ran 500+ experiments validating SSAF using only 8GB laptops and a 2010 iMac.
Built a full formal model, experimental framework, and complete research paper.
Demonstrated the ability to execute complex, multi‑month research projects independently, with reproducible code and clear documentation.
Strong background in systems analysis, AI behavior evaluation, and governance‑layer failure modes.
This project simply provides the compute capacity needed to scale the work already proven on minimal hardware.
Most likely causes of failure:
Compute limitations prevent running the largest planned simulations.
Cross‑architecture validation (CUDA → ARM) takes longer than expected.
Toolkit (v1.0) ships later than the 90‑day target.
Most likely outcomes if those risks materialize:
Partial characterization of SSAF rather than full production‑scale mapping.
Toolkit may be functional but less comprehensive.
Dataset may be smaller or missing some architecture comparisons.
Even in a partial‑success scenario, the project still produces useful artifacts: code, experiments, and a clearer understanding of SSAF dynamics.
$0. I have not raised any funding in the last 12 months. This is a new project with no prior grants.