Manifund

Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval

A Governed AI Research Platform for Cross-Domain Scientific Discovery

Science & technology

Pedro Bentancour Garin

Proposal · Grant
Closes January 16th, 2026
$0 raised
$75,000 minimum funding
$200,000 funding goal


15 days left to contribute


Project description

Lisa Intel is building a governed AI research and discovery platform that enables autonomous, cross-domain scientific innovation while maintaining safety, auditability, and control.

Recent MIT research shows that scientific foundation models converge across domains, enabling transferable representations between chemistry, materials science, biology, and beyond.

📄 MIT article: https://www.alphaxiv.org/abs/2512.03750

Our project focuses on operationalizing this insight into a real system:

What we are building

A modular AI research platform coordinating multiple domain-specific models

Autonomous cross-domain discovery loops (hypothesis → validation → iteration)

A built-in governance layer for safety, auditability, and policy alignment

Structured outputs suitable for validated research artifacts and IP generation

Unlike existing tools, Lisa Intel focuses not on a single model or domain, but on the control plane that governs how advanced AI research systems operate safely at scale.
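The discovery loop and governance layer described above can be sketched in miniature. This is a hypothetical illustration only; all class and function names (`AuditLog`, `discovery_loop`) are assumptions for exposition, not Lisa Intel's actual API:

```python
import datetime

class AuditLog:
    """Append-only record of every action the research loop takes."""
    def __init__(self):
        self.entries = []

    def record(self, actor, action, detail):
        self.entries.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
        })

def discovery_loop(propose, validate, log, max_iterations=5):
    """Hypothesis -> validation -> iteration, with every step audited."""
    hypothesis = propose(None)
    for _ in range(max_iterations):
        log.record("model", "propose", hypothesis)
        ok = validate(hypothesis)
        log.record("validator", "validate", {"hypothesis": hypothesis, "ok": ok})
        if ok:
            return hypothesis
        hypothesis = propose(hypothesis)
    return None

# Toy usage: "discover" a candidate divisible by 7.
log = AuditLog()
candidates = iter([3, 10, 14])
found = discovery_loop(
    propose=lambda prev: next(candidates),
    validate=lambda h: h % 7 == 0,
    log=log,
)
print(found)             # 14
print(len(log.entries))  # 6 (three proposals, three validations)
```

The point of the sketch is the shape, not the toy validator: every proposal and every validation leaves an audit entry, so the loop's full history is reconstructible after the fact.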

https://lisaintel.com

Project summary

This project develops a governed, autonomous AI research platform that turns cross-domain foundation model capabilities into safe, auditable scientific discovery. The goal is to create infrastructure that enables innovation without sacrificing control or safety.

What are this project's goals? How will you achieve them?


Goals

Build a reference implementation of a governed AI research system

Demonstrate safe, autonomous cross-domain discovery workflows

Establish a foundation for future standardization and policy alignment

How

Design and implement a modular orchestration architecture

Integrate governance mechanisms (audit logs, control policies, fail-safes)

Validate the system through simulated and pilot research tasks
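The governance mechanisms listed above (control policies, audit logs, fail-safes) compose in a simple way: every proposed action is checked against policy, the decision is logged whether or not the action is allowed, and a violation halts the pipeline. A minimal sketch, with all names (`ALLOWED_DOMAINS`, `FailSafeTriggered`, `run_with_governance`) assumed for illustration:

```python
ALLOWED_DOMAINS = {"chemistry", "materials", "biology"}

class FailSafeTriggered(Exception):
    """Raised when governance halts the pipeline."""

def policy_check(action):
    """Control policy: whitelisted domains only, no external writes."""
    return (action.get("domain") in ALLOWED_DOMAINS
            and not action.get("external_write", False))

def run_with_governance(actions, audit_log):
    """Execute actions one by one; audit every decision; halt on violation."""
    executed = []
    for action in actions:
        allowed = policy_check(action)
        audit_log.append({"action": action, "allowed": allowed})
        if not allowed:
            raise FailSafeTriggered(f"blocked: {action}")
        executed.append(action)
    return executed

audit = []
try:
    run_with_governance(
        [{"domain": "chemistry"}, {"domain": "finance"}],
        audit,
    )
except FailSafeTriggered:
    pass
print(len(audit))  # 2: both the allowed and the blocked action were audited
```

The design choice worth noting is that the audit entry is written before the fail-safe fires, so blocked actions are as visible in the log as permitted ones.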

How will this funding be used?


Funding will support:

Core system architecture and orchestration development

Governance and safety mechanism design

Validation through simulated research pipelines

Legal and technical work related to patent hardening and documentation

With $75k (minimum):

Deliver a functional architecture + governance prototype

With $200k (full funding):

Build a robust reference system

Run cross-domain validation experiments

Prepare the platform for external pilots and collaborations

Who is on your team? What's your track record on similar projects?


Pedro Bentancour Garin – Founder & Lead Architect

Background spanning:

AI governance and safety systems

Multidisciplinary research coordination

Prior work on large-scale, system-level technology concepts

Alexis Podolny – Strategic Advisor

Previously at the startup TheyDo, which raised €54M in two years.

The project is supported by ongoing dialogue with researchers, policy actors, and AI governance initiatives in the US and EU.

What are the most likely causes and outcomes if this project fails?


Likely causes

Insufficient funding to fully validate system-level assumptions

Slower-than-expected integration across domains

Outcomes

Even partial success yields valuable governance frameworks

Architecture and insights remain reusable for future AI safety and research infrastructure efforts

Downside risk is limited; upside impact is high.

How much money have you raised in the last 12 months, and from where?


$15,000 raised to date from family and friends

$50,000 from loans

The project is currently founder-funded and pre-seed.

Similar projects

Pedro Bentancour Garin

Global Governance & Safety Layer for Advanced AI Systems - We Stop Rogue AI

Building the first external oversight and containment framework + high-rigor attack/defense benchmarks to reduce catastrophic AI risk.

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 raised

Francesca Gomez

Develop technical framework for human control mechanisms for agentic AI systems

Building a technical mechanism to assess risks, evaluate safeguards, and identify control gaps in agentic AI systems, enabling verifiable human oversight.

Technical AI safety · AI governance
$10K raised

Kaynen B Pellegrino

Support SyberSuite: The first real Governance Platform for AI

Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks
$0 raised

Agustín Martinez Suñé

SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents

Seeking funding to develop and evaluate a new benchmark for systematically assessing the safety of LLM-based agents

Technical AI safety · AI governance
$1.98K raised

SaferAI

General support for SaferAI

Support for SaferAI’s technical and governance research and education programs to enable responsible and safe AI.

AI governance
$100K raised

Jared Johnson

Beyond Compute: Persistent Runtime AI Behavioral Conditioning w/o Weight Changes

Runtime safety protocols that modify reasoning, without weight changes. Operational across GPT, Claude, Gemini with zero security breaches in classified use

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 raised