I maintain 10 open source tools for AI security, governance, and accountability. The core projects are a cryptographic audit ledger for AI decisions (VAOL, 29 releases), an ML model supply chain attestation pipeline built on Sigstore/SLSA (7 releases), and a prompt injection firewall covering the OWASP LLM Top 10 (4 releases). All are Apache 2.0 or MIT licensed, with no commercial backing; I'm the sole maintainer. The portfolio runs 85+ CI/CD workflows and holds 5 OpenSSF Best Practices badges at PASSING.
The gap we're filling: when AI safety mechanisms fail in production, there's no independent way to verify what happened. RLHF and output filtering are model-level interventions; we work at the infrastructure level, providing the audit trail and verification layer that makes AI system behavior accountable after the fact. VAOL creates DSSE-signed audit records with SHA-256 hash chains and RFC 6962 Merkle trees. Our supply chain tool extends Sigstore/SLSA to ML models so you can verify a model in production matches what passed safety evaluations. The firewall detects 16 attack categories that cause safety-trained models to behave unsafely. The goal is to get all three tools through third-party security audits and publish formal analyses of their cryptographic properties.
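To make the hash-chain idea concrete, here is a minimal sketch of how tamper-evident audit records can be linked with SHA-256. This is illustrative only: the function names and record layout are hypothetical, not VAOL's actual API, and real records would additionally carry DSSE signatures and Merkle tree inclusion proofs.

```python
import hashlib
import json

def chain_records(records):
    """Link audit records into a SHA-256 hash chain: each entry's hash
    covers its payload plus the previous entry's hash, so altering any
    record invalidates every later link. (Illustrative layout, not VAOL's.)"""
    prev_hash = "0" * 64  # genesis sentinel
    chained = []
    for payload in records:
        body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
        entry_hash = hashlib.sha256(body.encode()).hexdigest()
        chained.append({"payload": payload, "prev": prev_hash, "hash": entry_hash})
        prev_hash = entry_hash
    return chained

def verify_chain(chained):
    """Recompute every link; return True only if no record was altered."""
    prev_hash = "0" * 64
    for entry in chained:
        body = json.dumps({"payload": entry["payload"], "prev": prev_hash},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = chain_records([{"decision": "approve"}, {"decision": "deny"}])
assert verify_chain(log)
log[0]["payload"]["decision"] = "deny"  # tamper with the first record
assert not verify_chain(log)           # every later link now fails
```

The after-the-fact accountability claim rests on exactly this property: an auditor who holds only the final hash can detect modification of any earlier record without trusting the system operator.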
$80,000 total. Third-party security audit of VAOL's DSSE signing pipeline ($40K), adversarial testing of supply chain attestation against 12 model tampering techniques ($25K), and compute for benchmark infrastructure and CI/CD ($15K). All audit reports and adversarial evaluation results will be published openly.
Solo maintainer: Ogulcan Aydogan, a software engineer based in London. I've shipped 29 releases on VAOL, 7 on supply chain attestation, and 4 on the firewall, with experience in cryptographic signing (DSSE, Sigstore, C2PA), ML pipeline security, and production infrastructure. The portfolio has 9 NLnet proposals under review, 3 Sovereign Tech Fund applications, 3 OpenAI Cybersecurity grant submissions, and a Mozilla grant application.
Main failure mode: the security audit reveals fundamental design flaws in VAOL's signing pipeline that can't be fixed without a full rewrite. This is unlikely given that we rely on standard cryptographic primitives (DSSE, Ed25519, SHA-256), but the integration could have subtle issues. If that happens, we'd publish the findings, which would still be a useful public result. Second risk: adoption remains low because the tools are too complex for most teams to deploy. We're mitigating this with better documentation and Docker/Kubernetes deployment guides, but it's a real concern for solo-maintained projects.
$0 raised so far. These projects have no commercial backing and I've self-funded all development. I have 19 grant applications submitted or under review across NLnet, Sovereign Tech Fund, OpenAI, Mozilla, LTFF, OTF, and others, but none have disbursed yet. This Manifund listing is one of several parallel funding efforts.