
AI Alignment Research Lab for Africa

Active grant
$2,450 raised
$120,000 funding goal

Project summary

The Equiano Project is a new AI research lab based in Africa. The lab's mission is to cultivate technical expertise that advances AI safety research, analysis of AI's economic impacts, and policy frameworks. The lab will focus on AI alignment research, specifically in the context of low-resource natural language processing (NLP), policy, and economic frameworks.

website: https://www.equiano.institute

Project goals

The Equiano Project has the following goals:

  • To train and cultivate AI Safety scholars in Africa

  • To develop indigenous interpretable models and policy frameworks, and to highlight the economic impacts of AI in Africa

  • To expand access to and collaboration on AI safety research in emerging markets, ensuring that diverse perspectives and voices are included in shaping AI policies and practices

  • To advance mechanistic interpretability techniques and evaluation specifically tailored to African AI models and data sets, enhancing transparency and accountability in AI systems deployed in the region

  • To empower Equiano Scholars with the skills and knowledge necessary for technical alignment and policy work, enabling them to contribute effectively to AI governance efforts

  • To investigate the potential productivity gains, automation opportunities, and economic impacts that AI can bring to various sectors in Africa, guiding policymakers and stakeholders in leveraging AI for sustainable development

Concrete steps to achieve those goals

  • Train and cultivate AI Safety scholars in Africa: The lab will offer a variety of training programs and research opportunities for African scholars. These programs will cover the fundamentals of AI safety, as well as the latest research in the field. The lab will also provide mentorship and support to help scholars develop their careers in AI safety.

  • Develop indigenous interpretable models and policy frameworks, and highlight the economic impacts of AI in Africa: The lab will develop AI models tailored to the needs of Africa. These models will be interpretable, meaning that their behaviour can be understood by humans. The lab will also develop policy frameworks that promote the responsible development and use of AI in Africa, and conduct research on the economic impacts of AI on the continent.

  • Expand access to and collaboration on AI safety research in emerging markets: The lab will partner with local and international research institutions, universities, governments, and industry stakeholders. These partnerships will help ensure that the lab's research is relevant to the needs of Africa and has a real impact on the continent. The lab will also host workshops and conferences to promote collaboration in the field of AI safety.

  • Advance mechanistic interpretability techniques and evaluation specifically tailored to African AI models and data sets, enhancing transparency and accountability in AI systems deployed in the region: The lab will develop new techniques for evaluating the interpretability of AI models. These techniques will be tailored to the needs of Africa, where data sets are often small and noisy.

  • Empower Equiano Scholars with the skills and knowledge necessary for technical alignment and policy work, enabling them to contribute effectively to AI governance efforts: The lab will provide training and mentorship to Equiano Scholars to help them develop the skills and knowledge necessary for technical alignment and policy work. The lab will also provide opportunities for Equiano Scholars to get involved in AI governance efforts.

  • Investigate the potential productivity gains, automation opportunities, and economic impacts that AI can bring to various sectors in Africa, guiding policymakers and stakeholders in leveraging AI for sustainable development: The lab will conduct research on the potential economic impacts of AI in Africa. This research will help policymakers and stakeholders to understand how AI can be used to promote sustainable development in Africa.

How will this funding be used?

The funding for the Equiano Project will be used to cover the following costs:

  • Salaries for lab staff

  • Research expenses, such as publishing and data costs

  • Equipment and software

  • Conferences and workshops

  • Outreach and dissemination

What is your (team's) track record on similar projects?

Advisors

  • Tyna Eloundou: Tyna is a researcher at OpenAI and a member of the 2020 cohort of OpenAI research scholars. She has published research on undesired content detection in the real world and on the labor-market impact potential of large language models. She has also worked on model safety and misuse, systemic risks, and the economic impacts of AI, among other topics.

  • Cecil Abungu: Cecil conducts research on AI risk with a special focus on issues faced by the Global South. He works with the AI:FAR team on projects related to how AI could lead to extreme inequality and power concentration. To further his research and build his knowledge in longtermism and AI risk, Cecil received support from Open Philanthropy's early-career funding for individuals interested in improving the long-term future.

Team

  • Joel Christoph: Joel is a Ph.D. researcher in Economics at the European University Institute (EUI) with a complementary background in policy and political science from Tsinghua University and the Carnegie Endowment, and a former Research Fellow at Oxford's FHI. His extensive international exposure and experience in leadership roles, such as Vice-Curator of the World Economic Forum (WEF) Global Shapers, further underscore his ability to navigate complex policy landscapes and drive strategic economic initiatives in diverse contexts.

  • Jonas Kgomo: Jonas holds a BSc in Mathematics (Istanbul University) and an MSc in Computer Science (Sussex University) and has experience working at early-stage software companies. He was part of the Entrepreneur First 2022 cohort and previously launched a Progress Studies overlay journal focusing on the progress we make as a civilisation in technology, science, and policy.

  • Claude Formaken: Claude is a Research Engineer at InstaDeep Ltd, an AI company, while pursuing a PhD in multi-agent reinforcement learning at the University of Cape Town, South Africa. Claude is both excited and concerned by the transformative impact advanced artificial intelligence (AI) will have on Africa and the world at large. He is passionate about this and works on AI safety initiatives in Africa.

  • JJ Balisanyuka-Smith: JJ graduated with Honors in Cognitive Science and Math from Swarthmore College, PA. His research interests are in machine learning theory, AI safety, and AI compression. He worked on early research at Cohere AI. He is an alumnus of the Sutton Trust/Fulbright US program and regularly volunteers with the program.

How could this project be actively harmful?

We are a responsible innovation lab: we design frameworks that ensure humans are placed first in the design process of our R&D solutions. We think that in a very unlikely case, our work could be misused to find out which individuals are being discriminated against by technology, and malicious actors could then focus on increasing those disparities.

What other funding is this person or project getting?

We are currently not funded.


Jonas Kgomo

6 months ago

Progress update

What progress have you made since your last update?

The Equiano Institute is a responsible AI research lab for Africa and the Global South. Here are our updates for the past 6 months:

Capacity Building for UN Diplomats
We have assisted the UN in preparation for the Global Digital Compact by consulting for and training UN diplomats. We also consult for the UNDP, UNICEF, and others on issues related to AI in Africa. We are contributing members of the Harvard Ash Center's Getting Plurality research network.

Capacity
We have run a Fellowship on AI governance and technical alignment.

Research Agendas:

  • Policy Impacts of AI: We publish research on AI policy development and technical safeguarding of digital public infrastructure.

  • Economic Impacts of AI: How does AI affect individuals and communities in low- and middle-income countries?

Open-Source Projects 
We are currently supporting OpenAI in developing multilingual language models for improving AI performance in African languages.

Coalitions
We currently spearhead coalition initiatives through two working groups.

  • The AI Working Group directly addresses challenges faced in the region by promoting responsible AI practices and tackling societal implications, focusing on issues like data access, governance, and participation in AI development.

  • The Africa Oversight (TAO) is a technical initiative focused specifically on ensuring the safe and ethical development of AI technologies in Africa. TAO uses multifaceted evaluation methods and aims to develop a comprehensive framework for best practices in ethical AI development and deployment within the African context.

The goal of both working groups is to promote transparency and accountability in the use of artificial intelligence (AI) in the public sector.

These combined efforts address the specific needs of Africa’s responsible AI trajectory.

We have published peer-reviewed papers at conferences such as Data for Policy and NeurIPS. Thank you for your time and consideration.

What are your next steps?

Global AI Benchmarks
We are building a more plural MMLU: a normative approach to evaluation that captures contextual nuances and differences across languages and cultures.
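As a rough illustration (a minimal sketch under our own assumptions, not our actual benchmark code: the ask_model stub, data fields, and toy items are hypothetical), a plural MMLU-style evaluation can pose the same multiple-choice item in several languages and report accuracy per language, so cross-lingual and cross-cultural gaps become visible:

```python
from collections import defaultdict

def ask_model(question, choices):
    """Hypothetical stand-in for a real model call; returns the index of the chosen answer."""
    return 0  # stub: always picks the first choice

def evaluate(items):
    """items: dicts with 'lang', 'question', 'choices', 'answer_idx'. Returns per-language accuracy."""
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        pred = ask_model(item["question"], item["choices"])
        total[item["lang"]] += 1
        correct[item["lang"]] += int(pred == item["answer_idx"])
    return {lang: correct[lang] / total[lang] for lang in total}

# Toy items: the same question, which would be localized per language in practice.
sample = [
    {"lang": "en", "question": "2 + 2 = ?", "choices": ["3", "4", "5", "6"], "answer_idx": 1},
    {"lang": "sw", "question": "2 + 2 = ?", "choices": ["3", "4", "5", "6"], "answer_idx": 1},
]
print(evaluate(sample))  # e.g. {'en': 0.0, 'sw': 0.0} with the stub model
```

Per-language accuracy gaps on properly localized items are one simple signal of the contextual differences the benchmark aims to surface.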
Government DPI AI Safety
We believe that our multilingual African language models can become a core part of government services, as various governments integrate AI into their public infrastructure, as we have seen recently.

Research Themes
We have research themes on Health, Development, and AI. Research we are excited about:

  • Development Impacts: this programme looks at the intersection of explainable AI and the development sector.

  • Social Impacts: this considers how public-good AI intersects with the normative and social aspects of AI readiness and strategy frameworks.

Is there anything others could help you with?

  • Funding for researchers contributing to our projects. Currently all our research is done on a volunteer basis.

  • Research collaborations are welcome


Ryan Kidd

6 months ago

Any update on the lab progress?


Jonas Kgomo

6 months ago

Dear @RyanKidd,
Thank you very much, I have updated our progress above!


Rachel Weinberg

about 1 year ago

@vincentweisser @pegahbyte @jajsmith after talking to Jonas, I moved this project back into the proposal stage and lowered the minimum funding to $500. You probably got an email before that your donation offers were declined because this project hadn't reached the minimum funding, but now that won't be true. Let me know if you want to delete your offers for any reason, but otherwise, as soon as we approve this project your donations will go through as though the minimum had been $500 instead of $1000 originally.

donated $400

Vincent Weisser

about 1 year ago

glad to hear and awesome to see this initiative!


Jonas Kgomo

over 1 year ago

Some updates after 30 days of launching the project without any funds:
- we have more than 40 researchers, including a few high school students interested in AI governance and alignment
- 5 research projects with 3-10 researchers each (researchers are mostly from Meta, MIT, Harvard, OpenAI, and Cornell)
- our timing is good, since African nation states are taking a keen interest in AI in Africa
- collaboration with GovAI and BlueDot on the fellowship
- organising a zero-cost online fellowship
- we are submitting our first paper, on Factored Cognition of LLMs using causal influence diagrams
- we have world-class PhD mentors leading projects at Equiano Institute

I would like to tag people who have regranted us before Manifund: @LeopoldAschenbrenner @LinchuanZhang
And people we have engaged with in the past few weeks @RenanAraujo
Thank you, would appreciate any thoughts.
@Austin


Jonas Kgomo

over 1 year ago

pinging to share with you on this final day pre-archival
@joel_bkr @GavrielK @IsaakFreeman @AntonMakiievskyi


Joel Becker

over 1 year ago

Hi Jonas! Unfortunately, I don't feel like I'm well-positioned to evaluate your project relative to other regrantors (or funders). My comparative advantage is in having a wide professional network of people who know their stuff, not context on AI research labs.


Jonas Kgomo

over 1 year ago

@joel_bkr Thanks Joel, appreciate your connecting us with your network


Gavriel Kleinwaks

over 1 year ago

Hi Jonas, similarly to Joel, I am not well-versed in AI and don't feel I have an informed assessment of your project--I am primarily focusing on biosecurity and policy.


Anton Makiievskyi

over 1 year ago

The one goal that may lead to significant impact, in my opinion, is: "Train and cultivate AI Safety scholars in Africa" - if it has a chance of scouting people who can contribute to "AI notkilleveryoneism". Not sure if establishing a lab in Africa is the most efficient path to that. Probably scholarships or targeted outreach would be more effective.


Jonas Kgomo

over 1 year ago

Agreed, we have seen different failure modes and talent agglomeration that are unique to Africa and might be absent in other AGI hubs. Training and capacity building is a special and central part of the project, and we want to launch a broader talent search program. So far we have been doing targeted outreach and community building, which is less intensive than launching a lab. We are using the approach of a grassroots research lab, which has proven effective in Africa for projects like Masakhane NLP, Cohere For AI, and Deep Learning Indaba [https://www.youtube.com/watch?v=qSCQ2Tv3YmA].

We have seen limits to global inclusion in AI development [https://arxiv.org/abs/2102.01265], and hence the opinions represented by models are less similar to those of participants in Africa [https://llmglobalvalues.anthropic.com]. We aim to build an interdisciplinary community capable of advising on AI governance and working on technical AI alignment research (especially for reducing disparities, bias, and suffering risks, "s-risks", caused by AI).