Manifund

Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval

QAI-QERRA: Open-Source Quantum-Ethical Safeguards for AI Misalignment

Science & technology · Technical AI safety · Global catastrophic risks

Marouso Metocharaki

Proposal · Grant
Closes February 15th, 2026
$0 raised
$15,000 minimum funding
$30,000 funding goal


39 days left to contribute


Project summary

Independent researcher from Greece advancing QAI-QERRA, an open-source (MIT-licensed) quantum-ethical framework for safeguards against misalignment in advanced AI and humanoid robots. Key features include real quantum entanglement (verified 8+ qubit W-states via IBM/PennyLane), proactive remorse simulation (Vectors 9/10 with perfect -1.00 alignment scores), distributed entanglement for fair allocation, and PEMEV-11 vectors enforcing planetary equity. Live on GitHub: https://github.com/marunigno-ship-it. This provides verifiable, proactive tools to reduce deceptive misalignment risks, fully transparent and adoptable.
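For readers who want to see what the quantum piece looks like in practice, here is a minimal, self-contained sketch of one standard way to prepare an n-qubit W state in PennyLane on a local simulator. It is an illustrative example only, not code from the QAI-QERRA repository; the device, qubit count, and circuit construction are assumptions made for this sketch.

import numpy as np
import pennylane as qml

n_qubits = 8  # matches the "8+ qubit" scale mentioned above; any n >= 2 works
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def w_state():
    # Place a single excitation on wire 0: |100...0>
    qml.PauliX(wires=0)
    # A cascade of controlled rotations spreads the excitation so that each
    # one-hot basis state ends up with amplitude 1/sqrt(n).
    for i in range(n_qubits - 1):
        theta = 2 * np.arccos(1 / np.sqrt(n_qubits - i))
        qml.CRY(theta, wires=[i, i + 1])  # split part of the amplitude onto wire i+1
        qml.CNOT(wires=[i + 1, i])        # clear wire i once the excitation has moved on
    return qml.probs(wires=range(n_qubits))

probs = w_state()
# Each of the n one-hot bitstrings should carry probability ~1/n (0.125 for n = 8).

Checking that the measured distribution concentrates on the one-hot bitstrings (and, on hardware, checking entanglement witnesses) is the kind of test that makes the "verified W-states" claim reproducible by third parties.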

What are this project's goals? How will you achieve them?

Goals (12-18 months):

  • Scale ethical vectors with noise robustness and larger quantum simulations (16+ qubits); a toy noise-robustness sketch is included at the end of this answer.

  • Implement multi-agent validation in simulated humanoid environments.

  • Achieve TRL 6 (prototype validation in relevant environments).

  • Release improved documentation, example circuits, and tutorials for community adoption/contributions.

  • Produce measurable alignment improvements (e.g., reproducible -1.00+ scores on remorse horizons).

How:

  • Full-time refinement using cloud quantum access (IBM, AWS Braket) and PennyLane.

  • Iterative testing/documenting on open repos.

  • Share progress publicly for feedback (X, GitHub, alignment forums).

This advances open-source safeguards prioritizing truth, fairness, and human flourishing, directly reducing long-term AI risks.
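To make the noise-robustness goal concrete, the following is a minimal sketch, assuming a simple single-qubit depolarizing noise model applied to the illustrative W-state circuit from the project summary. The device, noise placement, noise rates, and the "one-hot mass" metric are assumptions for this example, not the framework's actual ethical vectors or scoring.

import numpy as np
import pennylane as qml

n_qubits = 4  # kept small so the density-matrix simulation stays cheap
dev = qml.device("default.mixed", wires=n_qubits)  # mixed-state simulator supports noise channels

@qml.qnode(dev)
def noisy_w_state(p):
    qml.PauliX(wires=0)
    for i in range(n_qubits - 1):
        theta = 2 * np.arccos(1 / np.sqrt(n_qubits - i))
        qml.CRY(theta, wires=[i, i + 1])
        qml.CNOT(wires=[i + 1, i])
        qml.DepolarizingChannel(p, wires=i + 1)  # toy noise after each entangling step
    return qml.probs(wires=range(n_qubits))

def one_hot_mass(p):
    # Total probability left on the n one-hot bitstrings (1.0 in the noiseless case).
    probs = noisy_w_state(p)
    return float(sum(probs[1 << k] for k in range(n_qubits)))

for p in (0.0, 0.01, 0.05):
    print(f"depolarizing p = {p:.2f}: one-hot probability mass = {one_hot_mass(p):.3f}")

Sweeping the noise rate (or swapping in hardware-calibrated noise models) gives a simple, reproducible robustness curve that the larger-scale experiments can be measured against.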

How will this funding be used?

Requesting $30,000 (flexible: $20k minimal for basics, $50k ideal for accelerated compute). Breakdown:

  • 60% Personal runway/stipend (~€1,400–2,000/month): Independent furnished rental outside Athens, stable WiFi, food, healthcare/dental buffer, basic taxes, enabling full-time focus amid a currently unstable living situation and Greece's economic challenges.

  • 30% Compute resources: Cloud quantum simulators/GPU access (e.g., extended IBM credits, AWS) for scaling vectors and robustness testing.

  • 10% Contingency/tools: Laptop upgrade if needed, minor emergencies.

All spending transparent; no team salaries yet.

Who is on your team? What's your track record on similar projects?

Team: Solo independent researcher (Marouso Metocharaki / @marunigno); no team yet to keep overhead zero and focus pure.

Track record:

  • Built and released full QAI-QERRA prototype open-source, including verifiable quantum circuits and perfect -1.00 remorse vector scores.

  • Consistent solo progress on complex hybrid quantum-ethical code despite resource constraints.

  • Submitted to EIC Accelerator 2026 (short proposal with video pitch).

  • Resilient self-directed execution in deep-tech AI safety as an indie builder.

What are the most likely causes and outcomes if this project fails?

Most likely causes of failure:

  • Insufficient runway leading to survival interruptions (current unstable housing/economic barriers forcing slower/part-time work).

  • Limited compute access stalling larger-scale quantum tests.

Outcomes if the project fails:

  • Progress slows significantly — core repos remain open-source, but scaling/validation delayed (e.g., no TRL 6, fewer community contributions).

  • Valuable safeguards (remorse simulation, equity vectors) advance more gradually via spare time.

  • No catastrophic loss — code stays public and verifiable for others to build on.

Risks are low-impact due to open-source nature; funding primarily accelerates timeline and depth.

How much money have you raised in the last 12 months, and from where?

$0 formal funding in the last 12 months. Self-funded via personal resources/time. Applied to other grants (e.g., LTFF Dec 2025, EIC Accelerator, small microgrants) but no awards yet.

Similar projects

Ella Wei

Technical Implementation of the Tiered Invariants AI Governance Architecture

Achieving major reductions in code complexity and compute overhead while improving transparency and reducing deceptive model behavior

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $20K

Mackenzie Conor James Clark

AURA Protocol: Measurable Alignment for Autonomous AI Systems

An open-source framework for detecting and correcting agentic drift using formal metrics and internal control kernels

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $75K

Jared Johnson

Beyond Compute: Persistent Runtime AI Behavioral Conditioning w/o Weight Changes

Runtime safety protocols that modify reasoning, without weight changes. Operational across GPT, Claude, Gemini with zero security breaches in classified use

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 raised

Anthony Ware

Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap

Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.

Technical AI safety · AI governance · Global catastrophic risks
$0 / $23.5K