## Project summary
I am Feyzi Engin Ağır, an independent builder from Türkiye working on AI governance, decision safety, human oversight and public-interest digital tools.
This project will create a small public AI Governance Evidence Kit based on an early Engürü Lab prototype. The purpose is to make trustworthy AI governance more practical, reviewable and understandable for small organisations, public-interest teams and early-stage builders.
The kit will include:
- a short risk-mapping note;
- a human oversight checklist;
- review templates;
- screen evidence from the current prototype;
- a product-readiness note;
- a public learning note.
The goal is modest and concrete: turn an early governance concept into visible evidence that can be reviewed, shared and improved.
## What are this project's goals? How will you achieve them?
The first goal is to create a small but useful evidence kit for trustworthy AI governance. Many organisations talk about responsible AI, human oversight and accountability, but they often lack simple evidence structures they can actually use.
I will prepare the first version of the kit by reviewing the current Engürü Lab prototype, selecting the most important governance points, documenting the risk areas, writing a human oversight checklist and collecting screen evidence. I will then publish a short public learning note explaining what was built, what was learned and what should come next.
The project will be completed in a small sprint. The deliverables will be practical documents rather than a large platform: risk note, checklist, templates, screenshots and learning note.
## How will this funding be used?
Funding levels:
- $500: mini proof note + human oversight checklist.
- $1,000: evidence kit v0.3 + screen evidence + public learning note.
- $2,500 stretch goal: one short external review note + pilot-readiness material + stronger review templates.
The $1,000 target will be used for:
- digital tools and documentation support;
- preparation of the evidence kit;
- screen evidence collection;
- public learning note writing;
- small expert feedback or review support;
- basic production and publishing needs.
This is a small first grant. The aim is to turn existing work into reviewable evidence, not to fund a large build.
## Who is on your team? What's your track record on similar projects?
The project is led by me, Feyzi Engin Ağır. I am an independent builder and strategic product designer from Türkiye. I work on practical frameworks for trustworthy AI governance, human oversight, decision safety and fair collaboration.
My current work includes Engürü Lab, an early public prototype environment for AI governance, decision safety and institutional trust tools. I also recently submitted a related open-source public-interest proposal to NLnet / NGI Zero Commons for the Fair Share Collaboration App.
I will contribute the core model, product concept, documentation structure, public-interest framing and project coordination.
## What are the most likely causes and outcomes if this project fails?
The most likely failure mode is scope creep. AI governance can easily become too broad, abstract or institutional. To avoid this, I will keep the project small and deliver a narrow evidence kit rather than a large product.
Another risk is limited external feedback. If I cannot secure an external review within the first funding level, I will still deliver the core evidence kit and mark the review as a stretch goal.
If the project fails, the downside is limited: the output may remain a small internal documentation exercise. If it succeeds, the upside is meaningful: it creates a practical public evidence structure that can support later reviews, microgrants, pilots and stronger AI governance work from Türkiye.
## How much money have you raised in the last 12 months, and from where?
No grant money has been received for this specific project so far.
Recent applications have been submitted to the Awesome Foundation, NLnet / NGI Zero Commons and Emergent Ventures, but no funds have yet been awarded. This Manifund proposal is intended as a small, fast, public microgrant request to create the first practical evidence kit while those larger applications are under review.