Graev is an open-source, AI-powered web application designed to bridge the operational gap between EA grantmakers and grant applicants. I am seeking $6,000 to secure a 12-month API runway, eliminate critical rate-limiting bottlenecks, and refine Graev's evaluation rubrics alongside expert grantmakers.
The EA funding ecosystem is a rigorous, data-driven landscape that prioritizes marginal impact and impartial benevolence over traditional philanthropic motives. Operating through a bifurcated structure of large aggregators and specialized funds (like EAIF and Coefficient Giving), it relies on an "operating system" of BOTECs (back-of-the-envelope calculations) and explicit theories of change to ensure every dollar achieves the highest possible expected value.
Navigating this complex machinery is difficult for first-time applicants, particularly those from Low- and Middle-Income Countries (LMICs), who often fail to secure funding simply because their proposals are poorly structured. Simultaneously, community organizers and grantmakers spend disproportionate hours evaluating and providing feedback on these proposals. Graev addresses both problems by helping applicants master these evaluation criteria and secure funding for high-impact projects.
Graev functions as a dual-purpose tool built on Gemini 2.0 and grounded in the public evaluation frameworks of top EA funders (GiveWell, Coefficient Giving, and EA Funds). I have two primary goals:
Goal 1: Reduce time spent by grantmakers. By automating the initial deep-dive analysis of proposals, Graev radically reduces the manual hours community organizers and fund managers spend pre-screening inbound applications.
Goal 2: Increase funding chances for first-time applicants. Many applicants, particularly first-timers from LMICs, have highly impactful projects but struggle to structure strong, EA-aligned proposals. Graev acts as an accessible assistant that helps them iterate on their drafts before submission.
How we will achieve this: Currently, Graev successfully processes evaluations but is bottlenecked by free-tier API rate limits. We will achieve our goals by:
Upgrading to a paid, high-capacity API compute budget to solve the rate-limiting issues.
Creating cause-area specific rubrics (e.g., separating AI Safety rubrics from Global Health).
Including a human feedback loop to allow users to refine the AI's verdicts.
Creating user accounts so applicants can save and iterate on their proposals.
Collaborating with expert grant evaluators and granting organizations to strengthen the rubrics and align them with current grantmaking trends.
I am requesting $6,000 to transition Graev from a heavily rate-limited MVP into a robust, production-ready platform.
API Compute Budget ($3,000): Purely allocated to secure a high-capacity, 12-month API runway.
Expert Consultation Stipend ($2,000): This stipend will contract an experienced grant analyst or fund manager to formally audit and update the evaluation rubrics, ensuring they align tightly with current granting heuristics.
Development Stipend ($500): Structured as a student stipend, this buys out my time from outside work, allowing me to dedicate focused engineering hours to building the human feedback loop, user accounts, and cause-area-specific rubrics.
Domain & Hosting ($500): Covers custom domain registration and premium frontend/backend hosting to keep the tool reliably online over a 2-year period.
The API Compute: Here is how the $3,000 API budget breaks down:
The Token Math: Grant proposals are heavy on input context but light on output generation. A standard 15-page grant proposal, plus the EA evaluation rubrics fed into the prompt, totals around 20,000 to 40,000 tokens per evaluation. Graev's structured output is much shorter, around 1,000 to 2,000 tokens. Because grant evaluation requires high logical rigor, Graev routes requests through heavier "Pro" class models, which cost a few cents per evaluation.
Scaling the Volume: The tool attracted over 100 users from the EA community within 5 days of launch. A budget of $250 a month provides a comfortable buffer for this kind of high-volume, multi-user adoption, totaling $3,000 for a 12-month runway.
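The arithmetic above can be sketched as a quick cost model. The per-million-token prices below are illustrative placeholders, not actual Gemini rates; the token counts come from the worst-case estimates above (40,000 input, 2,000 output per evaluation).

```python
# Back-of-the-envelope cost model for one Graev evaluation.
# ASSUMPTION: prices are placeholders, not actual Gemini rates.
PRICE_IN_PER_M = 1.25    # assumed $ per 1M input tokens
PRICE_OUT_PER_M = 10.00  # assumed $ per 1M output tokens

def cost_per_evaluation(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single proposal evaluation."""
    return (input_tokens / 1e6) * PRICE_IN_PER_M \
         + (output_tokens / 1e6) * PRICE_OUT_PER_M

# Worst-case evaluation: 40k tokens in, 2k tokens out.
worst_case = cost_per_evaluation(40_000, 2_000)  # about $0.07

# Evaluations a $250 monthly budget supports at that worst-case cost.
monthly_budget = 250.0
evals_per_month = monthly_budget / worst_case

print(f"${worst_case:.2f} per evaluation, "
      f"~{evals_per_month:.0f} evaluations/month")
```

At these assumed rates, even the worst-case evaluation costs a few cents, so a $250/month budget covers a few thousand evaluations, which is why it comfortably absorbs traffic spikes from multi-user adoption.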
Future Sustainability: This API runway buys 12 months of high-speed execution. Beyond that, depending on how well the tool performs after these upgrades, my long-term vision is to make Graev entirely self-sustaining.
I, Habeeb Abdulfatah (Lead Developer & Maintainer), combine technical engineering capacity with deep, ground-level experience in the Effective Altruism and AI Safety ecosystems.
I currently work on the AI evaluations team for an AI startup, giving me direct professional experience in building and deploying evaluation frameworks.
I serve as the President of the EA Club at Ahmadu Bello University, where I have successfully raised and managed community grants for over 2 years.
I am the Policy Lead for PauseAI Nigeria.
I co-founded AI Safety Hub Nigeria.
I am an active member of EA Nigeria.
The Cause: The most likely cause of failure is insufficient funding to cover API compute costs, leading to persistent rate-limiting errors when traffic spikes.
The Outcome: If this project fails to secure funding, Graev remains a stunted prototype. Grantmakers will continue to lose time to manual evaluations. More importantly, highly impactful projects from first-time LMIC applicants will likely continue to be rejected due to a lack of accessible, structural guidance during the application process.
Zero external funding has been raised for the software development or infrastructure of Graev. It has been entirely bootstrapped using personal engineering hours and free-tier infrastructure.
Link to Graev: https://graev.netlify.app/