
Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval

Testing a Deterministic Safety Layer for Agentic AI (QGI Prototype)

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
QGResearch

Ella Wei

Proposal · Grant
Closes February 2nd, 2026
$0 raised
$3,000 minimum funding
$20,000 funding goal


9 days left to contribute


Project summary

1. The Problem: AI Governance Is Fragmented and Reactive

Today’s governance landscape is a patchwork of incompatible rules, tools, and oversight systems. Every model, company, and jurisdiction reinvents governance from scratch, creating complexity, inconsistency, and gaps that widen as AI accelerates.

2. The Solution: QGI as a Universal, Invariant‑Based Governance Engine

QGI presents a simple top‑level structure because its complexity is abstracted away structurally.
It doesn't remove that complexity; it compresses it into:

  • universal principles

  • stable invariants

  • clear separation of concerns

  • deterministic enforcement

QGI consolidates governance into invariant‑based enforcement, enabling consistent oversight across models and contexts.

This follows the same architectural lineage as TCP/IP, the OSI model, SQL, the Linux kernel, and constitutional frameworks — systems that appear simple on the surface because they encode deep complexity into stable, universal structures.
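
To make the layering concrete, here is a minimal Python sketch of how principles, invariants, and deterministic enforcement could be separated in code. The class and field names are illustrative assumptions for this proposal, not the QGI specification itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Principle:
    """A universal principle that an invariant is derived from."""
    name: str
    statement: str

@dataclass(frozen=True)
class Invariant:
    """A stable constraint derived from exactly one principle."""
    name: str
    principle: Principle
    check: Callable[[dict], bool]  # deterministic: same action, same verdict

def enforce(action: dict, invariants: list[Invariant]) -> bool:
    """Deterministic enforcement: allow an action only if every invariant holds."""
    return all(inv.check(action) for inv in invariants)
```

The point of the sketch is the separation of concerns: principles stay fixed, invariants encode them as deterministic checks, and enforcement is a single pure function over those checks.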

3. The Goal: Build the First Working Prototype

The architecture is complete.
This project will develop the first functional prototype to validate the invariants, demonstrate cross‑model governance, and establish QGI as a foundational layer others can build on.

4. Five Key Benefits of QGI

If successful, QGI could deliver major systemic benefits (the figures below were calculated mathematically and validated in real‑case simulations):

  • Reduce governance code by up to 85% through a unified invariant layer

  • Cut compute overhead (time and power) by up to 70% by replacing repeated checks with a single‑pass evaluation (sketched after this list)

  • Eliminate black‑box reasoning, with full traceability from action → invariant → principle

  • Adapt instantly to regulatory updates (EU AI Act, GDPR, etc.)

  • Require zero retraining, since QGI governs actions, not model weights
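
As a rough illustration of the single‑pass evaluation and traceability claims above, the sketch below shows one possible shape for the trace record. All names and types here are assumptions for exposition, not the prototype's actual data model.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TraceEntry:
    """One link in the action -> invariant -> principle chain."""
    action_id: str
    invariant: str
    principle: str
    passed: bool

# Each invariant is (name, principle it derives from, deterministic check).
InvariantSpec = tuple[str, str, Callable[[dict], bool]]

def evaluate_once(action: dict, invariants: list[InvariantSpec]) -> tuple[bool, list[TraceEntry]]:
    """Single pass: every invariant is checked exactly once per action, and every
    verdict is recorded, so no decision is left as a black box."""
    trace = [
        TraceEntry(action["id"], name, principle, check(action))
        for name, principle, check in invariants
    ]
    return all(entry.passed for entry in trace), trace
```

Because every verdict is written to the trace, each allowed or blocked action can be explained by pointing to the exact invariant and principle involved.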

Link to foundation intro video:

https://www.youtube.com/watch?v=CkgmpEHCsnQ

What are this project's goals? How will you achieve them?

The goal of this project is to build and test the first working prototype of the QGI deterministic governance engine — a safety layer that evaluates an AI system’s reasoning before it acts, using five universal ethical invariants. This prototype will allow us to empirically validate whether invariant‑based governance can reduce safety overhead, improve transparency, and enforce ethical constraints without retraining models.

To achieve this, I will:

  • Develop a minimal working prototype of the invariant evaluation engine, implementing the five core invariants (Non‑Harm, Autonomy, Opacity‑Limit, Mutual‑Benefit, Evolvability); a sketch of the intended interface follows this list.

  • Integrate the prototype with small‑scale AI tasks to test how QGI intercepts and governs model reasoning traces.

  • Run benchmark experiments comparing QGI’s performance to traditional governance layers, focusing on compute overhead, consistency, and traceability.

  • Evaluate regulatory adaptability, demonstrating how QGI maps invariant outcomes to obligations from frameworks like the EU AI Act and GDPR.

  • Produce a public technical report summarizing results, limitations, and next steps for large‑scale testing with industry partners.
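
To clarify what the prototype engine is expected to expose, here is a hypothetical interface sketch. The five invariant names come from this proposal; the placeholder checks, dictionary fields, and function signatures are assumptions, not the planned implementation.

```python
# The five invariant names are from the proposal; every check below is a placeholder.
def check_non_harm(trace: dict) -> bool:
    # Placeholder: a real check would analyse the plan in the reasoning trace.
    return "harm" not in trace.get("plan", "").lower()

CHECKS = {
    "Non-Harm": check_non_harm,
    "Autonomy": lambda t: not t.get("overrides_user", False),
    "Opacity-Limit": lambda t: t.get("reasoning_visible", True),
    "Mutual-Benefit": lambda t: t.get("benefit_score", 0.0) >= 0.0,
    "Evolvability": lambda t: not t.get("locks_in_irreversible_state", False),
}

def govern(reasoning_trace: dict) -> dict:
    """Evaluate a model's reasoning trace against all five invariants
    before the proposed action is allowed to execute."""
    verdicts = {name: check(reasoning_trace) for name, check in CHECKS.items()}
    return {"allowed": all(verdicts.values()), "verdicts": verdicts}
```

In the real prototype, each placeholder check would be replaced by the formally specified invariant logic, and `govern` would be called on the model's reasoning trace before any tool call or output is released.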

This project transforms QGI from a fully specified framework into a validated, testable governance architecture — the critical step needed before collaborating with major AI labs and service providers.

How will this funding be used?

This funding will be used to build and test the first working prototype of the QGI invariant governance engine. Every dollar goes directly toward transforming QGI from a fully specified architecture into a validated, empirical system ready for collaboration with major AI labs.

The budget supports:

  • Prototype development — implementing the five ethical invariants (Non‑Harm, Autonomy, Opacity‑Limit, Mutual‑Benefit, Evolvability) in a minimal working engine.

  • Evaluation environment — running controlled tests on small‑scale AI tasks to measure safety performance, compute overhead, and consistency.

  • Benchmarking & analysis — comparing QGI’s deterministic governance layer to traditional safety filters, including efficiency gains and zero‑retraining behavior.

  • Regulatory mapping tests — demonstrating how QGI adapts to frameworks like the EU AI Act, GDPR, and PIPEDA through jurisdiction profiles (see the sketch after this list).

  • Documentation & reporting — producing a clear technical report summarizing results, limitations, and next steps for large‑scale testing with industry partners.
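
A minimal sketch of how a jurisdiction profile might map invariant outcomes to regulatory obligations is shown below. The regulation names appear in this proposal; the specific obligations and the mapping logic are invented placeholders, not claims about what those laws require.

```python
# Illustrative jurisdiction profiles; obligation text is a made-up placeholder.
JURISDICTION_PROFILES = {
    "EU": {
        "regulations": ["EU AI Act", "GDPR"],
        # Which invariant failures trigger which (hypothetical) obligations.
        "obligations": {
            "Non-Harm": "log the incident and notify the deployer",
            "Opacity-Limit": "produce a transparency report for the decision",
        },
    },
    "Canada": {
        "regulations": ["PIPEDA"],
        "obligations": {
            "Autonomy": "record the consent basis for the affected data subject",
        },
    },
}

def obligations_for(jurisdiction: str, failed_invariants: list[str]) -> list[str]:
    """Map failed invariants to the obligations defined in one jurisdiction profile."""
    mapping = JURISDICTION_PROFILES.get(jurisdiction, {}).get("obligations", {})
    return [mapping[inv] for inv in failed_invariants if inv in mapping]
```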

This funding enables the critical transition from theory to practice — the moment where QGI becomes testable, measurable, and ready for adoption by organizations facing the growing governance bottleneck in AI deployment.

Who is on your team? What's your track record on similar projects?

This is a solo project. I am an independent AI governance architect based in Montreal, specializing in deterministic safety systems, compliance frameworks, and high‑complexity technical design. My background spans many years as a business analyst, data analyst, and project manager in the IT industry, where I translated complex business models into technological solutions and collaborated closely with AI and data science teams.

Over the past year, I have developed the full QGI architecture — a tiered, invariant‑based, mathematically defined governance system designed to enforce universal safety constraints at runtime. This includes the formal specification of the five ethical invariants, the tiered governance structure, the jurisdictional mapping layer, and the system flowcharts used for implementation.

I have extensive experience in:

  • designing governance and compliance architectures

  • building technical frameworks from first principles

  • creating implementation‑ready diagrams, logic flows, and mathematical notation

  • managing complex, multi‑stakeholder technical projects

  • communicating scientific and technical concepts clearly to diverse audiences

QGI is the culmination of this work: a fully specified governance architecture now ready for empirical testing. This project represents the next step — transforming the architecture into a working prototype and validating its performance on real AI tasks.

What are the most likely causes and outcomes if this project fails?

1. Integration Resistance

Cause: The AI industry is currently optimized for LLM “patching” (RLHF, safety fine‑tuning, classifier stacks). A full architectural shift toward invariant‑based governance may face resistance because it requires rethinking existing pipelines and deployment workflows.

Mitigation: Design QGI as a plug‑in supervisor layer that intercepts reasoning traces without replacing or retraining existing models. This lowers adoption friction and allows QGI to integrate into current infrastructures as an add‑on rather than a paradigm shift.
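
The plug‑in supervisor pattern described above can be sketched as a thin wrapper around an existing model API. Here `model.generate` stands in for whatever interface the underlying model already exposes, and the `govern` callable is an invariant evaluator like the earlier sketch; nothing in this sketch retrains or modifies model weights.

```python
class GovernedModel:
    """Wraps an existing, unmodified model behind a QGI-style governance check."""

    def __init__(self, model, govern):
        self.model = model      # any existing model; its weights are never touched
        self.govern = govern    # invariant evaluator, e.g. the govern() sketch above

    def act(self, prompt: str) -> str:
        reasoning_trace = self.model.generate(prompt)   # assumed model API
        decision = self.govern(reasoning_trace)
        if not decision["allowed"]:
            return "[action blocked by governance layer]"
        return reasoning_trace.get("final_answer", "")
```

Because the wrapper only inspects the model's outputs, existing pipelines keep their models, prompts, and fine‑tunes unchanged, which is what keeps adoption friction low.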

2. Complexity Paradox

Cause: In attempting to simplify AI ethics into universal “gravity‑like” invariants, the system may become too rigid to handle the nuance, ambiguity, and contextual flexibility required for natural language reasoning.

Mitigation: Use a Hybrid‑Logic architecture where QGI provides deterministic guardrails and constraint enforcement, while the underlying LLM retains responsibility for creative, contextual, and linguistic interpretation. This preserves flexibility while ensuring safety.

3. Insufficient Edge‑Case Coverage

Cause: Early versions of the invariant engine may fail on rare or adversarial edge cases that were not anticipated during initial design. These failures could produce false positives (blocking safe actions) or false negatives (allowing unsafe ones), making it difficult to evaluate QGI’s reliability.

Mitigation: Adopt a progressive stress‑testing approach, starting with simple synthetic tasks and gradually introducing adversarial, ambiguous, and high‑variance scenarios. Each failure becomes a data point for refining the invariants, improving the evaluation logic, and expanding the edge‑case library. This ensures that early gaps strengthen the system rather than undermine it.

How much money have you raised in the last 12 months, and from where?

I am just beginning the fundraising process for QGI, and this Manifund project is the first public funding request for the prototype and validation phase. Until now, all work on the QGI architecture — including the full invariant design, tiered governance structure, and implementation‑ready diagrams — has been self‑funded.

This project represents the transition from privately developed research to community‑supported empirical testing. The goal is to build the first working prototype and generate the initial validation results needed for collaboration with major AI labs and research institutions.
