Project Pitch
Black in AI Safety and Ethics (BASE) is a new AI safety organization launched ~2 months ago to advance the AI safety field and produce high-impact research by empowering Black-identifying AI safety researchers and field-builders.
In our first two months, we have seen:
A Slack community of over 250 people
400+ fellowship interest form submissions
50+ mentor applications
30,000+ LinkedIn views on the fellowship interest form announcement post, with 382 engagements and 70 reposts
A consistent twice-weekly paper reading group, with one session optimized for EST/PST and another for WAT/EAT time zones
On Dec 16, we posted our fellowship application announcement on LinkedIn. Within one day, we saw:
Over 59,000 views, 672 engagements, and 178 reposts
250+ sign-ups for our Q&A
50+ fellowship applications
Theory of Change
BASE’s theory of change begins with the understanding that the development of advanced AI will shape major aspects of society, from institutions and governance to work, decision-making, and knowledge systems. Our long-term aim is to reduce the risk of severely negative outcomes from advanced AI by addressing the gradual disempowerment that occurs when Black scholarship, professional participation, and research involvement are absent from the discussions shaping these technologies. When this exclusion happens, the questions asked and the risks acknowledged are narrowed, leading to long-term patterns of disempowerment. This occurs not as the result of a single moment of disengagement, but through the cumulative effect of missed perspectives, missed opportunities, and the absence of Black leadership.
BASE creates learning, mentorship, and research pathways that prepare Black participants to contribute to the development, evaluation, and governance of advanced AI. The Fellowship supports individuals as they build their foundation and gain guided experience. At the same time, the Research Working Group provides opportunities to identify research gaps, examine emerging issues, and conduct work informed by BASE’s interdisciplinary approach. Together, these efforts provide the structure and support needed for Black participants to enter the AI ecosystem, conduct meaningful research, and assume roles that shape the governance and understanding of advanced AI.
Over time, this approach will expand the influence and representation of Black scholars, professionals, and researchers working in AI safety and ethics. As more individuals contribute to research and evaluation, the field becomes better equipped to address a broader range of risks and pursue more responsible approaches to advanced AI. By building these pathways, BASE aims to interrupt the long arc of disempowerment and support a future in which Black insight and leadership guide the development of advanced AI.
Why this Program
We estimate the counterfactual value of this grant to be extremely high, for the following reasons:
According to 80,000 Hours, only a few thousand people worldwide are engaged in technical approaches to reducing existential risks from advanced AI systems, and the OECD has reported that global demand for AI governance and security expertise is already outpacing supply.
Many of the researchers we have identified are currently outside the traditional EA/AI safety network. They are unlikely to be selected by Coefficient Giving or similar funders in the near term due to geographic distance. Without this specific fellowship, talent in this demographic simply will not pivot into AI safety work.
These experts have high market value. The counterfactual to them working on AI safety is likely that they would work on accelerating AI capabilities in industry. This grant serves as a pivot point to capture this talent for the alignment ecosystem before they are locked into capabilities work.
If we want to make serious progress on a global pause or safety treaty, we should have broad involvement from researchers in the global majority (e.g., Africa and Latin America). Developing an ecosystem of Black AI safety experts is a critical governance asset for future international negotiations.
Finally, we need more people contributing their expertise and perspectives to address complex problems. Current safety teams share similar backgrounds, increasing the risk of correlated errors in threat modeling. We propose reducing this epistemic risk by recruiting uncorrelated, highly competent researchers from the Black community.
What are our goals / How will we achieve them
To launch our first Fellowship cohort, we are seeking support from our crowdfunding community. Our goal is to mentor 20–40 Fellows in Spring 2026, rapidly upskilling experts in AI safety. Fellows will build their AI safety expertise through:
4 weeks of structured training using the ARENA, BlueDot, Oxford AI Governance, and AI Security Bootcamp materials
8 weeks of mentored research projects across the BASE streams
Continued support for placements into internships, fellowships (MATS, PIBBSS, Astra, SPAR, ERA, etc.), research grants, or early-career roles
Ongoing application review, career guidance, and introductions to hiring managers and researchers
We have identified a range of highly skilled researchers interested in entering AI safety and ethics research and want to provide them with the mentorship and training they need to pursue careers in this field.
Fellowship Goals
The BASE Fellowship is designed to produce clear, measurable outputs that reflect the development, preparation, and progress of Black scholars, professionals, and researchers engaged with advanced AI. Our expected outputs include:
A cohort of highly motivated fellows who begin developing early research directions within one of the three BASE streams (AI Alignment, AI Governance, AI Security).
Fellows who demonstrate the ability to work collaboratively with mentors and peers in a structured research environment.
Strengthened foundational skills and judgment needed to contribute to AI safety, governance, and security.
A meaningful expansion of each fellow’s AI safety network, including mentors, peers, researchers, and hiring managers.
Access to ongoing mentorship and professional support that continues beyond the 12-week fellowship.
Targeted pathways into next-stage opportunities, including internships, research positions, policy placements, and fellowships such as MATS, Astra, ERA, and similar programs.
Greater exposure to the broader AI safety ecosystem, supported through mentor engagement and BASE’s growing partner network.
Measuring Impact
At least 80% of fellows complete the pilot year across the three BASE streams.
At least 70% of fellows complete a mentored research project and publish a public blog post or written output.
At least 60% of fellows transition into next-stage opportunities, including internships, research assistantships, policy roles, industry positions related to AI safety/governance, or fellowships such as MATS, Astra, ERA, and others.
At least 80% of graduates would recommend the fellowship to others, reflected in a Net Promoter Score (NPS) of 8 or higher.
Fellows expand their professional networks, gaining 10–15 new contacts (researchers, mentors, hiring managers) whom they feel comfortable approaching for support or collaboration.
At least 70% of mentors report high satisfaction with fellow preparedness, engagement, and project development.
Growing external recognition of BASE, demonstrated by inbound requests from labs, research organizations, and fellowship programs seeking referrals or collaboration with BASE fellows.
Research Goals
Our organization is sourcing mentors in, and focusing its research on, systemic safety and robustness. We want to encourage work that identifies blind spots in current model evaluations, with a focus on failure modes that occur in underrepresented data distributions. We are also excited about work in technical governance: methods and requirements for demonstrating that models remain aligned and reliable even when deployed in high-complexity, real-world social environments.
Measuring Impact
Number of research outputs produced by fellows (expected 15–20 high-impact research outputs over three years)
Number of major research reports from the Research Working Group (expected 5–7)
Percentage of fellows entering AI safety, security, governance, or ethics roles (target: over 70%)
Strength and engagement of the BASE community
Formal partnerships with AI labs and research organizations
How this funding will be used (6-month launch funding):
$155,400 — Staff salaries (3 FTE @ $45/hr for 6 months)
$144,000 — Fellow stipends (30 fellows at 20 hrs/week for 12 weeks)
$7,000 — Operational tools and software (Slack, Canva, Airtable, ChatGPT, Zoom, G-Suite, website, Canvas LMS)
$750 — Coworking and workspace costs
$43,800 — 15% buffer for administrative and unforeseen expenses
Total Requested: $350,950
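As a quick sanity check, the line items above sum exactly to the requested total. The minimal Python sketch below verifies that arithmetic; the roughly $20/hr stipend rate is our inference from the stipend line item, not a figure stated in the budget.

```python
# Sketch: verify that the listed budget line items sum to the requested total.
line_items = {
    "staff_salaries": 155_400,   # 3 FTE @ $45/hr for 6 months
    "fellow_stipends": 144_000,  # 30 fellows, 20 hrs/week, 12 weeks
    "tools_software": 7_000,     # Slack, Canva, Airtable, ChatGPT, Zoom, etc.
    "coworking": 750,
    "buffer": 43_800,            # buffer for administrative/unforeseen expenses
}

total = sum(line_items.values())
assert total == 350_950, total   # matches the "Total Requested" figure above

# Implied per-fellow stipend rate (our inference, not stated in the budget):
stipend_rate = line_items["fellow_stipends"] / (30 * 20 * 12)  # = $20.00/hr
print(f"Total requested: ${total:,} | implied stipend rate: ${stipend_rate:.2f}/hr")
```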
Next Six Months
Open and review fellowship applications
Run the complete 12-week fellowship program
Support research projects through to publication
Establish the BASE Research Working Group
Grow mentorship and partnership networks
Establish long-term partnerships and funding streams
Prepare for the second cohort cycle (Fall 2026)
Our founding team comprises three members with complementary experience in AI safety research, governance, cybersecurity, and program leadership, united by a shared focus on high-impact work.
Lawrence Wagner brings over ten years of experience in project management, cybersecurity, and entrepreneurship, including designing and running training programs. He previously served as a Research Manager with the ML Alignment & Theory Scholars (MATS) program, where he coordinated AI safety research projects and supported scholars and mentors through the research lifecycle. He has also conducted research at UC Berkeley focused on AI governance, risk management, and the intersection of technical systems with policy and cybersecurity considerations.
Abigail Yohannes is a data analyst and research manager with experience across AI safety, threat detection, and technology policy. She currently works at Ambient.ai, where she analyzes and evaluates real-time computer vision systems and supports safety-focused deployment for enterprise clients. Previously, she managed AI safety research at MATS, conducted AI and autonomous vehicle reliability research at NCAT’s AiViCar Lab, and served as a Policy Data and Visualization Analyst at National Journal.
Krystal Jackson brings deep experience in AI safety research, risk management, and governance. She is currently a Non-Resident Research Fellow at UC Berkeley’s Center for Long-Term Cybersecurity, where she works on AI security and risk management for general-purpose and frontier AI systems, including research cited by NIST. She has contributed to national and international AI safety and standards-setting efforts and previously served in government roles focused on AI policy and cybersecurity.
We are also putting together an advisory board. Its current members are:
Ryan Kidd is the Co-Executive Director of MATS, a board member and co-founder of the London Initiative for Safe AI (LISA), and a Manifund regrantor. Previously, he completed a PhD in Physics at the University of Queensland (UQ).
Jonas Kgomo is the founder and director of the Equiano Institute, a grassroots organization focused on AI alignment and governance in Africa. He is also a research collaborator with the Harvard Ash Center’s GETTING-Plurality Research Network and a research engineer at the University of Cambridge. His work spans AI governance, responsible AI deployment, and technical research, with a particular focus on how emerging AI systems interact with societal and institutional contexts in the Global South.
Early funding is essential to BASE’s ability to launch its first cohort and build the structures needed for long-term success. The first $20–65K we receive would give our team the runway to remain entirely focused on preparing the inaugural fellowship and supporting the early development of our Research Working Group. This initial support enables BASE to operate with greater stability, increasing our chances of securing the full funding needed to run a full version of the program. It also allows us to develop early Minimum Viable Products (MVPs), such as curriculum modules, mentorship frameworks, and research coordination systems, to strengthen the fellowship before launch.
If total funding falls short of our 6-month goal (approximately $350K) but reaches at least $260K, we would still run the fellowship, but the program may need to be modified in several ways:
• Reducing the cohort size.
A smaller cohort (e.g., 10–15 fellows instead of 30) would reduce stipend costs and lighten the mentoring load. While BASE would still deliver an intense experience, a smaller cohort limits the number of Black scholars entering the field at a moment when representation is deeply needed in AI safety, governance, security, and ethics.
• Reducing staff hours or delaying hiring.
With partial funding, the founding team would reduce paid hours in the short term, which may slow program development and limit how quickly we can build partnerships, coordinate mentors, or expand the Research Working Group.
• Delaying or scaling down tools that support instruction and research.
For example, implementing our learning management system (Canvas) or expanding Airtable research tracking may need to wait until additional funding is secured. These tools improve structure, accessibility, and research coordination, and delaying them would reduce efficiency for both fellows and staff.
These trade-offs would not prevent BASE from running the program, but they would affect the scope and depth of the experience we can provide. With full funding, we can offer a program that preserves the essential elements of the fellowship: high-quality instruction, consistent mentorship, structured research opportunities, and a supportive pathway into careers in AI safety, governance, and security.
What are the most likely causes and outcomes if this project fails?
Although we already have strong interest from several mentors, advanced AI safety expertise is concentrated in a relatively small community. If we cannot consistently match fellows with mentors whose expertise aligns with their research direction, the quality of fellows’ projects and their long-term readiness may suffer.
How BASE reduces this risk:
We are proactively developing a broad mentor network across AI alignment, governance, security, and ethics, including researchers from MATS, BlueDot, academia, relevant nonprofits, and industry labs. We have already received over 50 mentor applications and are actively recruiting additional specialized expertise through targeted outreach, including the MATS mentorship list and LinkedIn. We also diversify mentorship by using both primary and secondary mentors, ensuring each fellow receives consistent guidance across specialized areas.
If the training materials, mentorship support, or research scoping are insufficient, some fellows may struggle to produce work rigorous enough to gain recognition or advance to roles in AI safety, governance, security, or ethics.
How BASE reduces this risk:
Our 4-week structured curriculum (ARENA, BlueDot, Oxford, and AI Security Bootcamp Materials) ensures that fellows build a foundational understanding before beginning their research projects. Fellows receive close guidance from mentors in scoping research questions, iterating on ideas, and producing work aligned with community standards. We also provide writing support, regular check-ins, and opportunities for peer feedback.
Running a larger fellowship or multiple cohorts per year requires strong operational systems, a growing mentor network, and sustained funding. Without these, BASE may face bottlenecks that limit its ability to increase impact.
How BASE reduces this risk:
We are investing early in repeatable processes, tracking systems (Airtable), a learning management system (Canvas), and research coordination structures. We are also building relationships with funders and partners who can support long-term growth, not just a single cohort. Our Research Working Group helps drive year-round engagement, strengthening community retention and readiness to scale.
If we fall short of our fundraising targets, we may need to reduce cohort size, staff hours, or program components, thereby limiting the depth of support we can offer fellows.
How BASE reduces this risk:
We are actively engaging funders early, pursuing a diversified funding portfolio (Manifund, Coefficient Giving, individual donors, corporate philanthropy), and preparing clear MVPs and measurable outputs that demonstrate early traction. We can also scale the program in a modular way rather than cancel it entirely.
If the fellowship fails to provide meaningful research progress, connections, or career advancement, fellows and mentors may conclude that their time would have been better spent elsewhere.
How BASE reduces this risk:
We emphasize structured research support, high-touch mentorship, exposure to the AI safety ecosystem, and career navigation to roles, internships, and advanced fellowships, including MATS, Astra, ERA, and others. We also prioritize clear communication and expectation setting at the outset of the program.
Long-Term Organizational Goals
BASE’s long-term vision is to strengthen the AI safety ecosystem by addressing persistent gaps in both research capacity and talent pipelines. The Fellowship is designed to supply the AI safety community with the skilled researchers, practitioners, and professionals it will need over the next three to five years. By preparing fellows to engage with safety, security, governance, and evaluation work, the Fellowship helps build a cohort of contributors who can influence how safety benchmarks are developed, how security risks are assessed, and how governance frameworks are shaped within the AI safety landscape and beyond.
Motivated by the argument that there are too few AI safety research organizations and too narrow a set of contributors shaping the field, BASE has long-term goals to establish a robust Research Arm, currently operating as the Research Working Group. This group will conduct independent AI safety research with support from professionals within the BASE community and from some of our most promising Fellows as they progress beyond the Fellowship. Over time, the Research Arm is intended to become a self-sustaining research group that publishes high-quality work and is supported through industry and government contracts.
While the current request to the Manifund community focuses on launching the Fellowship as an entry point for technical professionals pivoting into AI safety, BASE’s scope extends beyond training alone. In the near term, the Research Arm will leverage the expertise of the broader BASE community to support the quality and rigor of Fellows’ research outputs. The Research Director will oversee research lifecycles by pairing projects with internal and external reviewers and establishing processes that guide work from initial concept through refinement, publication, and dissemination.
Together, the Fellowship and the Research Arm create a pipeline from early engagement to sustained contribution. The Fellowship expands the pool of prepared talent entering AI safety, while the Research Arm provides a long-term home for research, collaboration, and influence. This integrated approach positions BASE to contribute both people and knowledge to the AI safety ecosystem, strengthening its capacity to address emerging risks over time.
Fundraising plans
We are in the process of applying to LTFF, SFF, and Coefficient Giving.
We plan to partner with industry organizations aligned with our AI safety goals.
For ethics-focused programming, we will pursue separate fundraising so that funds raised here can be directed to x-risk-focused AI safety and EA-aligned work.