RAE: A Safe and Auditable Memory Architecture for AI Agents

Grzegorz Leśniowski

Proposal · Grant
Closes March 1st, 2026
$0 raised
$2,000 minimum funding
$10,000 funding goal

26 days left to contribute


Project summary

GitHub repository: https://github.com/dreamsoft-pro/RAE-agentic-memory

RAE (Reflective Agentic-Memory Engine) is an open-source memory architecture for AI agents, focused on reliability, auditability, and safety.

Instead of treating memory as an unstructured context window, RAE separates information into explicit layers with defined lifecycles, access rules, and consolidation logic. The project targets developers and researchers who need predictable, inspectable agent behavior rather than opaque prompt accumulation.
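
To make the layered model concrete, here is a minimal sketch of what explicit layers, lifecycles, access rules, and consolidation can look like in code. Every name below (Layer, TTL, MemoryStore) is an illustrative assumption for this pitch, not RAE's actual API; the repository has the real interfaces.

```python
# Minimal sketch of a layered memory model with explicit lifecycles,
# access rules, and consolidation. All names here are illustrative
# assumptions, not RAE's actual API.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum


class Layer(Enum):
    WORKING = "working"    # short-lived, task-scoped scratch space
    EPISODIC = "episodic"  # per-session records, consolidated later
    SEMANTIC = "semantic"  # long-lived, curated knowledge


# Per-layer lifecycle: how long items live before they expire.
TTL = {
    Layer.WORKING: timedelta(minutes=30),
    Layer.EPISODIC: timedelta(days=7),
    Layer.SEMANTIC: None,  # retained until explicitly revised
}


@dataclass
class MemoryItem:
    layer: Layer
    content: str
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def expired(self) -> bool:
        ttl = TTL[self.layer]
        return ttl is not None and datetime.now(timezone.utc) - self.created_at > ttl


class MemoryStore:
    """Every read, write, and consolidation is an explicit, logged operation."""

    def __init__(self) -> None:
        self.items: list[MemoryItem] = []
        self.audit_log: list[str] = []

    def write(self, agent: str, layer: Layer, content: str) -> None:
        # Access rule: the semantic layer is never written directly;
        # it is populated only by the consolidation step below.
        if layer is Layer.SEMANTIC:
            raise PermissionError(f"{agent} may not write {layer.value} directly")
        self.items.append(MemoryItem(layer, content))
        self.audit_log.append(f"{agent} wrote {layer.value}: {content!r}")

    def read(self, agent: str, layer: Layer) -> list[str]:
        self.audit_log.append(f"{agent} read {layer.value}")
        return [i.content for i in self.items
                if i.layer is layer and not i.expired()]

    def consolidate(self) -> None:
        # Promote surviving episodic items into semantic memory. A real
        # engine would summarize and deduplicate here, not copy verbatim.
        survivors = [i for i in self.items
                     if i.layer is Layer.EPISODIC and not i.expired()]
        for item in survivors:
            self.items.append(MemoryItem(Layer.SEMANTIC, item.content))
            self.audit_log.append(f"consolidated: {item.content!r}")
```

The point of this structure is that nothing reaches long-term memory without passing through a defined, logged step, and every access leaves an audit trail that can be inspected after the fact.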


What are this project's goals? How will you achieve them?

The goal of this project is to develop RAE into a reliable, auditable, and reusable memory architecture for AI agents.

This will be achieved by:

- Clearly defining and stabilizing the layered memory model and APIs

- Improving consolidation, retention, and access control mechanisms

- Adding reproducible tests and benchmarks to detect regressions and leakage (see the test sketch after this list)

- Providing clear documentation and examples for adopters
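
As an illustration of the testing goal above, here is a pytest-style sketch of the two properties a leakage regression suite would pin down. It reuses the hypothetical Layer, TTL, and MemoryStore names from the project summary sketch; this is not RAE's real test suite.

```python
# Regression tests for the illustrative MemoryStore sketched above:
# expired working memory must never appear in reads, and the semantic
# layer must reject direct writes.
import pytest


def test_expired_working_memory_does_not_leak():
    store = MemoryStore()
    store.write("agent-a", Layer.WORKING, "scratch note")
    # Simulate expiry by back-dating the item past the working-layer TTL.
    store.items[0].created_at -= TTL[Layer.WORKING] * 2
    assert store.read("agent-a", Layer.WORKING) == []


def test_semantic_layer_rejects_direct_writes():
    store = MemoryStore()
    with pytest.raises(PermissionError):
        store.write("agent-a", Layer.SEMANTIC, "unreviewed fact")
```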

The project focuses on engineering discipline, predictability, and long-term maintainability rather than rapid feature growth.

How will this funding be used?

The funding will primarily be used to buy focused development time.

Specifically:

- Stabilizing and documenting core interfaces and contracts

- Improving security posture and safe-by-default configurations (sketched after this list)

- Expanding tests, benchmarks, and reproducibility tooling

- Improving documentation and onboarding materials
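
To illustrate what "safe-by-default configurations" means here: the zero-argument configuration should be the most restrictive one, and relaxing any setting should be an explicit, visible decision. A hypothetical sketch follows; the field names are assumptions, not RAE's actual configuration schema.

```python
# A sketch of "safe by default": restrictive defaults, explicit opt-outs.
# Field names are hypothetical, not RAE's configuration schema.
from dataclasses import dataclass
import warnings


@dataclass(frozen=True)
class MemoryConfig:
    allow_cross_agent_reads: bool = False  # agents see only their own memory
    persist_raw_inputs: bool = False       # store summaries, not raw prompts
    max_retention_days: int = 7            # bounded retention unless raised
    audit_log_enabled: bool = True         # every access is recorded


def load_config(overrides: dict | None = None) -> MemoryConfig:
    """Apply explicit overrides on top of the restrictive defaults."""
    cfg = MemoryConfig(**(overrides or {}))
    if cfg.allow_cross_agent_reads:
        warnings.warn("cross-agent reads enabled; make sure this is intended")
    return cfg
```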

The goal is to turn RAE from a working system into a polished, reusable open-source component.

Who is on your team? What's your track record on similar projects?

I am currently the sole core maintainer and developer.

I have many years of experience building and maintaining production software in the printing and industrial automation domain, where reliability, traceability, and long-term maintainability are critical. I have designed, implemented, and operated complex systems end-to-end (architecture, backend services, databases, deployment, and maintenance).

RAE is not a greenfield idea: it is an active open-source project with a working codebase, documentation, tests, and real users. I have a strong track record of shipping and maintaining software over long time horizons rather than producing short-lived prototypes.

What are the most likely causes and outcomes if this project fails?

The most likely causes of failure are limited maintainer time and insufficient funding to fully harden and document the system.

If the project fails, the outcome would primarily be opportunity cost: RAE would remain a working but less polished internal tool rather than a reusable, well-documented open-source component. No negative externalities or safety risks are expected from failure.

How much money have you raised in the last 12 months, and from where?

$0 external funding.

The project has been developed using my own time and personal resources.

Comments (1)
Piotr Zaborszczyk

1 day ago

Grzegorz, is it for AI safety? To me it looks mostly like AI capabilities, and thus not a charitable project.
But maybe I'm missing something. If in fact it is something like >90% safety and <10% capabilities, please do explain. It's not enough to design new AI capabilities that are auditable/inspectable for it to count as "AI safety work".