Update (Feb 2026): SSC v1.1 Draft Published
SSC v1.1 is now public:
- Canonical (versioned): https://github.com/repozilla2/sentinel-proxy/tree/main/docs/ssc
- Review (site hub): https://invariantgovernor.com/#ssc
This milestone clarifies scope (TRL-4/5, evidence-scoped, not certification) and defines caps, modes, stop behavior, and required evidence artifacts.
What this is (2026 focus): infrastructure, not hardware sales
- Standard: SSC (Safety Specification Contract)
- Category: Actuation Clamp (hardware safety interposer at the actuator boundary)
- Reference implementation: Obex (v1)
- Legacy naming: “Sentinel” appears in the repo slug / fundraiser title during the transition.
Minimum funding ($15k) ships: SSC v1.1 + Conformance Harness v1 + Evidence Packs (schema + verifier).
This fundraiser funds a standard + proof workflow:
- SSC v1.1 (Safety Specification Contract): units, modes (Teach/Field/Maintenance), stop behavior, required evidence fields
- Conformance Harness v1: repeatable tests (including malformed traffic + fuzzing / anti-wedge robustness)
- Evidence Packs + Verifier: machine-readable logs + latency distributions (P50/P95/P99) with integrity tooling
Status: TRL-4 bench prototype (validated). Target: TRL-5 partner reproduction.
Claim boundary: not a safety-rated device and not robot certification. Claims are test-envelope scoped and backed by published evidence packs.
Not in scope: certification claims, industrial cobot demos, proprietary robot arm (gate-locked behind later evidence gates).
Open source: Apache-2.0 (software) • CERN-OHL-P-2.0 (hardware).
Sentinel: SSC — Reproducible Actuator-Boundary Standard
Open infrastructure for actuator-boundary enforcement: SSC v1.1, a conformance harness, and public evidence packs (Obex reference implementation).
What this fundraiser funds (deliverables you can independently verify)
This raise funds the transition from a working bench prototype to a reproducible, third-party-checkable proof workflow:
SSC v1.1 -> Conformance Harness v1 -> Evidence Packs.
Public deliverables:
- SSC v1.1: units/semantics, modes (Teach/Field/Maintenance), stop behavior, required evidence fields
- Conformance Harness v1: enforcement tests + malformed traffic handling + fuzzing/anti-wedge robustness
- Evidence Packs + Verifier: machine-readable logs + latency distributions (P50/P95/P99) with integrity tooling
How to verify we shipped:
- Public repo release tags for SSC v1.1, Conformance Harness v1, and evidence tooling
- Downloadable Evidence Pack (EP-YYYYMMDD-###) with config + firmware build ID, trial counts, enforcement outcomes, wedge count, latency distributions, and hash-chained log + verifier output (a hypothetical manifest is sketched below)
- One-command harness run that generates an evidence pack from a defined profile
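To make that verification concrete, here is a purely hypothetical sketch of what an Evidence Pack manifest could contain. All field names and values are illustrative placeholders invented for this sketch, not the published schema (which ships with Milestone 1):

```python
# Hypothetical sketch only: field names and values are illustrative
# placeholders, not the published Evidence Pack schema.
import json

manifest = {
    "pack_id": "EP-YYYYMMDD-###",          # ID format quoted on this page
    "firmware_build_id": "<build-id>",     # ties results to an exact build
    "config_hash": "<sha256-of-config>",   # declared test envelope/config
    "trial_count": 0,                      # filled in by the harness run
    "enforcement_outcomes": {"pass": 0, "rewrite": 0},
    "wedge_count": 0,                      # published target: 0 across fuzzing
    "latency_ms": {"p50": None, "p95": None, "p99": None},  # measured values
    "log_head_hash": "<sha256>",           # anchor of the hash-chained log
}
print(json.dumps(manifest, indent=2))
```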
Links: invariantgovernor.com | https://github.com/repozilla2/sentinel-proxy | youtube.com/watch?v=bjI_DN_1DXA
Licenses: Apache-2.0 (software) • CERN-OHL-P-2.0 (hardware)
Once an AI/ROS stack can issue actuator commands, software-only guardrails (prompts, policy layers, RLHF, “safety nodes”) are no longer a safety boundary. In robotics, failures become motion.
What’s missing in most real-world stacks is a deterministic enforcement layer at the actuator interface that does two things:
- Enforces hard caps (e.g., velocity/acceleration/position/effort limits), even if upstream software misbehaves or crashes
- Produces reproducible proof artifacts (tests + machine-readable logs) that an independent party can run and verify
Open robotics ecosystems (ROS / LeRobot-style pipelines in particular) make experimentation easy, but they rarely provide a standardized way to generate and share “safety evidence” that is comparable across labs.
SSC is a normative, machine-readable contract for actuator-boundary enforcement. Obex is the Track-A reference implementation of an SSC-defined Actuation Clamp.
The differentiator is not the PCB — it’s the standard + proof workflow that makes conformance claims reproducible:
- SSC v1.1: a normative contract defining units, semantics, modes (Teach/Field/Maintenance), stop behavior, and required evidence fields
- Conformance Harness: a repeatable test suite anyone can run to validate SSC behavior (including malformed traffic + fuzzing / anti‑wedge robustness)
- Evidence Packs: machine-readable artifacts (configuration + calibration constants + logs + measured distributions like P50/P95/P99), with integrity tooling so third parties can review and reproduce results
SSC v1.1 core commitments (public + testable)
- Caps: V_CAP in actuator ticks/sec; A_CAP in actuator ticks/sec² (global defaults with per‑joint overrides)
- Field Mode default behavior: REWRITE (clamp/shape to caps + log) to reduce nuisance trips and bypass incentives; see the sketch below
- Safe stop default: HOLD (effort‑limited hold + latch), with slip monitoring and class‑dependent escalation (gate‑locked)
- TRL‑6 gates (deferred until evidence exists): independent witness (external encoder) + physically enforced Field Mode.
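As a rough illustration of the REWRITE semantics above, the sketch below clamps a requested joint velocity to V_CAP and rate-limits its change to A_CAP. The names, default values, and the per-joint override shape are assumptions made for this sketch, not the normative SSC v1.1 text:

```python
# Sketch only: names, defaults, and the override shape are assumptions,
# not the normative SSC v1.1 text. Caps are in actuator ticks/sec(^2).
V_CAP_DEFAULT = 1000.0                     # ticks/sec (illustrative)
A_CAP_DEFAULT = 4000.0                     # ticks/sec^2 (illustrative)
PER_JOINT = {"elbow": {"v_cap": 500.0}}    # hypothetical per-joint override

def rewrite_velocity(joint: str, requested: float,
                     prev_applied: float, dt: float) -> float:
    """Clamp a requested velocity to V_CAP, then rate-limit by A_CAP.

    Returns the applied value; the caller logs requested vs. applied.
    """
    caps = PER_JOINT.get(joint, {})
    v_cap = caps.get("v_cap", V_CAP_DEFAULT)
    a_cap = caps.get("a_cap", A_CAP_DEFAULT)
    v = max(-v_cap, min(v_cap, requested))   # velocity cap
    max_dv = a_cap * dt                      # max allowed change this tick
    return max(prev_applied - max_dv, min(prev_applied + max_dv, v))
```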
The current TRL‑4 bench demo makes a deliberately narrow, concrete guarantee of deterministic command clamping at the actuator boundary, with reviewable, machine‑readable logs (requested → applied + timestamps + mode):
- The actuator moves freely within a configured safe range (example: 10° → 170°)
- Out‑of‑range position requests are rewritten/clamped to the configured limit
- Each enforcement event is recorded (requested value → applied value, timestamps, and mode), as in the sketch below
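A minimal sketch of that demo loop, assuming the 10°–170° safe range from the example and an in-memory log; the function name and record fields are illustrative, not the shipped firmware API:

```python
# Sketch of the demo behavior; the function and record fields are
# illustrative, not the shipped firmware API.
import time

SAFE_MIN_DEG, SAFE_MAX_DEG = 10.0, 170.0   # configured safe range (demo values)

def clamp_and_log(requested_deg: float, mode: str, log: list) -> float:
    """Rewrite out-of-range requests to the nearest limit and log the event."""
    applied = max(SAFE_MIN_DEG, min(SAFE_MAX_DEG, requested_deg))
    if applied != requested_deg:           # enforcement event
        log.append({"t": time.time(), "mode": mode,
                    "requested_deg": requested_deg, "applied_deg": applied})
    return applied

events: list = []
clamp_and_log(185.0, "Field", events)   # rewritten to 170.0 and logged
clamp_and_log(90.0, "Field", events)    # in range: passes through, no event
```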
Demo video: https://www.youtube.com/watch?v=bjI_DN_1DXA
Performance is reported as evidence‑scoped distributions (e.g., P50/P95/P99 latency under the declared envelope), not single “hero numbers.”
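For instance, given per-command latency samples extracted from a pack's logs, the percentiles could be computed along these lines (a sketch, not the shipped verifier):

```python
# Sketch, not the shipped verifier: percentile reporting from latency samples.
from statistics import quantiles

def latency_distribution(samples_ms: list) -> dict:
    """Return P50/P95/P99 from per-command latencies (milliseconds)."""
    q = quantiles(samples_ms, n=100)   # 99 cut points; q[k-1] ~ k-th percentile
    return {"p50": q[49], "p95": q[94], "p99": q[98]}
```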
Sentinel is intended as a public good for embodied AI safety research: a shared, executable safety contract (SSC) plus conformance tooling that makes actuator‑boundary safety claims reproducible and comparable across teams. By publishing reference implementations, schematics, and evidence-pack tooling under permissive open licenses (Apache‑2.0 / CERN‑OHL‑P‑2.0), we enable independent verification and reduce duplicated effort across labs—improving safety practice during early-stage robotics experimentation. Funding now is unusually high leverage: it converts an already working prototype into a shared benchmark and proof workflow before embodied AI deployments outpace safety practice.
Goals and approach (evidence-first)
The goal is to move from a TRL‑4 bench prototype toward TRL‑6 readiness by building compounding public assets:
spec → conformance → evidence → installs → credibility → field pilots
We do not “declare safety.” We earn it gate by gate. Each gate produces reproducible artifacts (logs, reports, evidence packs) that can be re-run independently.
Evidence-first gates (high level)
Near-term gates funded by this raise (TRL‑4 → TRL‑5 readiness):
- Gate 2: MITM passthrough + protocol fuzzing (anti‑wedge robustness; see the sketch below)
- Gate 3: TRL‑4 containment conformance (default deny + deterministic REWRITE/clamp)
- Gate 4: TRL‑5 telemetry witness loop + stop ladder metrics (publish distributions)
- Gate E: evidence integrity (hash‑chained logs + verifier tooling)
Later fieldability gates (explicitly deferred / gate‑locked):
- Gate 5: interlock prototype characterization (trip curves + I²t/RMS protection)
- Gate 5.5: Field Mode physical enforcement + tamper/bypass tests
- Gate 6: SSC‑P1 conformance + mainstream traffic compatibility
- Gate 7: TRL‑6 independent witness (external encoder) + tolerance characterization
(We maintain a regression policy: changes to enforcement/parsing/stop ladder require rerunning Gates 2–4.)
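To give a flavor of the Gate 2 anti-wedge test, here is a sketch under assumed harness primitives: `send_frame` and `make_valid_frame` are hypothetical stand-ins for the harness interface, which ships with Conformance Harness v1.

```python
# Sketch of the anti-wedge idea; `send_frame` and `make_valid_frame` are
# hypothetical stand-ins for the harness interface (ships with Harness v1).
import os
import random

def fuzz_round(send_frame, make_valid_frame, trials: int = 1000) -> int:
    """Return the wedge count observed; the published target is 0."""
    wedges = 0
    for _ in range(trials):
        junk = os.urandom(random.randint(1, 64))   # malformed frame
        send_frame(junk, expect_reply=False)
        # Liveness probe: a known-good frame must still get a timely reply.
        if send_frame(make_valid_frame(), expect_reply=True) is None:
            wedges += 1
    return wedges
```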
Milestones (funding targets and public outputs)
Milestone 1 — SSC v1.1 + Conformance Harness v1 + TRL‑4 Evidence Pack (~$15,000)
Objective: turn the prototype into a repeatable evidence engine.
Deliverables (public):
- Publish SSC v1.1 (units/semantics/modes/stop ladder + required evidence fields)
- Release Conformance Harness v1, including:
  - allowlist + REWRITE enforcement tests
  - malformed packet handling
  - protocol fuzzing / anti‑wedge tests
- Publish Evidence Pack schema + example packs
- Build a small run of fixtures/dev hardware to reproduce tests reliably
Evidence produced (public):
- a tagged repo release + an evidence pack showing:
  - enforcement success rate across randomized sequences
  - fuzzing survival (wedge count = 0)
  - latency distributions (P50/P95/P99) within the declared envelope
Milestone 2 — TRL‑5 witness loop + evidence integrity (~$37,500 cumulative)
Objective: move from command-plane containment to witnessed telemetry + integrity tooling.
Deliverables (public):
- Telemetry witness loop (TRL‑5) + stop ladder behavior
- False-positive characterization + thresholds defined per gate
- Evidence integrity tooling (sketched below):
  - hash‑chained logs
  - verifier script for third parties
- Regression policy enforced: firmware changes require rerunning Gates 2–4
Evidence produced (public):
- a TRL‑5 evidence pack that an external lab can reproduce using the harness
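The integrity idea is simple to sketch: each log record embeds the hash of its predecessor, so any edit, deletion, or reordering breaks the chain. The record layout below is an assumption for illustration; the published verifier (Milestone 2) defines the real format:

```python
# Illustrative hash-chain sketch; the published verifier defines the real
# record format. Each entry links to its predecessor via SHA-256.
import hashlib
import json

def chain_append(log: list, record: dict) -> None:
    """Append `record` with a hash linking it to the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64   # genesis anchor
    body = json.dumps(record, sort_keys=True)
    entry = dict(record, prev=prev,
                 hash=hashlib.sha256((prev + body).encode()).hexdigest())
    log.append(entry)

def verify_chain(log: list) -> bool:
    """Recompute every link; any edit, drop, or reorder returns False."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps({k: v for k, v in entry.items()
                           if k not in ("prev", "hash")}, sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

A third party re-runs `verify_chain` over a shipped log and compares the final hash against the pack's declared anchor; this is what makes the logs checkable without trusting the producer.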
Milestone 3 — fieldable posture groundwork (~$50,000 cumulative)
Objective: build prerequisites before credible field pilots.
Deliverables (public):
- Interlock prototype characterization (trip curves + I²t/RMS protection)
- Field Mode physical enforcement (switch/jumper + logged mode changes)
- Gate 6 groundwork: SSC‑P1 conformance + mainstream traffic compatibility plan + test cases
- Independent witness plan + mounting tolerance protocol (TRL‑6 readiness)
- Seed a small batch to reproduction partners (labs that run the harness and publish evidence packs)
Evidence produced (public):
- trip curve + thermal protection report
- Field Mode persistence + tamper/bypass report
- SSC‑P1 conformance plan + compatibility notes
- TRL‑6 witness readiness plan + tolerance characterization protocol
Claim discipline: industrial cobot demos and external functional safety reviews are intentionally gate‑locked until Gates 5–7 produce reproducible evidence packs. This reduces premature claims and improves credibility with partners and insurers.
Q: Is Sentinel hardware or software?
A: Both, intentionally. The stack consists of SSC (the executable safety specification), conformance tooling, and Obex (the reference Actuation Clamp implementation). The product is the repeatable boundary + proof workflow, not just a board.
Q: Why not just enforce limits in Python/ROS?
A: Software-only limits run on the same compute stack as the AI/planner and share its failure modes (bugs, misconfig, crash/timeout, unexpected behavior). Sentinel enforces caps at the signal boundary and is designed to keep working even when upstream logic is wrong.
Q: What does SSC v1.1 actually standardize?
A: SSC v1.1 standardizes testable semantics for caps and stop behavior, including:
- V_CAP in actuator ticks/sec and A_CAP in ticks/sec²
- Field Mode default behavior: REWRITE (clamp/shape to caps + log)
- Safe stop default: HOLD (effort‑limited hold + latch)
- TRL‑6 gates (deferred until evidence exists): independent witness (external encoder) + physically enforced Field Mode
Q: Does Sentinel make robots “safe around humans”?
A: No single layer can promise that. SSC/Obex enforce deterministic actuator-boundary caps and publish reproducible evidence, but human safety also depends on mechanical design, payload, end-effector design, workspace constraints, and application-level controls. Claims are gate-scoped and evidence-backed.
Q: Is Sentinel certified (IEC 61508 / ISO 13849) today?
A: Not yet. This project builds prerequisite assets for credible certification work later: a spec, a conformance suite, evidence packs, and documented test gates. We intentionally defer “industrial” claims until gates produce reproducible evidence.
Q: Who is the first user?
A: The beachhead is the embodied AI research ecosystem (ROS / LeRobot-style stacks, university labs, prototyping teams) where integration friction is low and adoption can compound. Industrial OEMs are long-cycle targets and not the first buyer.
Q: What prevents bypassing Sentinel?
A: Bypass resistance is staged and measurable. Early dev units optimize for adoption and evidence; fieldable units add physically enforced Field Mode, interlock behavior reserved for loss-of-control, tamper/bypass tests, and independent witness triggers. The conformance harness includes bypass/fault-injection cases so bypass becomes a test outcome, not a promise.
Q: What will backers get (public outputs)?
A: Public releases include SSC v1.1, the conformance harness, example evidence packs (with verifier tooling), and a reproducible gate ladder. Funding also supports partner reproduction so independent teams can generate comparable evidence.
Q: What’s the biggest risk?
A: Two core risks: (1) robustness under malformed traffic (parser wedge/lock-up), and (2) bypass incentives if enforcement causes nuisance stops. That’s why fuzzing/anti‑wedge testing and REWRITE-by-default Field Mode are first‑class requirements.
How funding will be used (high-level)
Funding turns Sentinel from a one‑off prototype into a reproducible safety standard: spec → conformance → evidence → partner reproduction. We avoid “general runway” spending; the budget is tied to milestone deliverables and publishable artifacts.
Note: The displayed minimum ($14,996) is a platform display issue; the intended minimum is $15,000 and Manifund has been notified.
- $15,000 (Milestone 1): ship SSC v1.1 + Conformance Harness v1 + initial Evidence Pack tooling
  (measurement/logging stack, fixtures, and a small dev hardware run to reproduce tests reliably)
- +$22,500 (Milestone 2 incremental; $37,500 cumulative): add TRL‑5 witness loop + evidence integrity tooling
  (hash‑chained logs + verifier scripts) and support partner reproduction runs
- +$12,500 (Milestone 3 incremental; $50,000 cumulative): fieldable posture groundwork
  (interlock characterization, physically enforced Field Mode hardware, and seed logistics for reproduction partners)
- $2,500 (contingency): parts volatility, shipping delays, and additional bench characterization required to keep evidence packs reproducible
Keith Gariepy (Founder): Systems architect with 15+ years in high‑reliability control systems (signal integrity, real‑time control, fail‑safe design). Built the current TRL‑4 Sentinel prototype and is executing an evidence‑first development and release process (spec → conformance → evidence packs → partner reproduction).
Most likely failure modes (and mitigations)
1) Robustness failure under malformed traffic (parser wedge / lock‑up / undefined behavior)
Mitigation: make fuzzing and anti‑wedge behavior first‑class requirements (Gate 2), publish wedge counts in evidence packs, and enforce a strict regression policy (changes to parsing/enforcement require rerunning Gates 2–4).
2) Adoption friction or bypass incentives (too hard to integrate, or nuisance stops motivate workarounds)
Mitigation: Field Mode default behavior is REWRITE/clamp + log (not DROP) to reduce nuisance trips; the conformance harness is designed developer‑first with minimal integration steps; bypass/tamper tests are explicitly gate‑locked for later fieldable revisions.
3) Safe‑stop physics misunderstandings (especially gravity axes and “power‑off is always safe” assumptions)
Mitigation: explicit robot‑class tagging, HOLD as an effort‑limited hold with slip monitoring + escalation, and strict public claim discipline (no “power‑off always safe” framing).
4) Overclaiming before evidence exists (credibility failure)
Mitigation: claims are gate‑scoped; we publish evidence packs as the primary output and defer industrial demos / external functional safety review until the relevant gates produce reproducible artifacts.
Funding received to date: $0. The project has been entirely bootstrapped and self-funded by the founder.