This Manifund Project had a high minimum funding amount, a short fuse, and very low public visibility. Is there another way to support CeSIA, since I was unable to do so via this Project?
Project summary
A center for AI safety in Paris, established in May 2024 by EffiSciences, aiming to promote a responsible development of AI. The center will focus on:
Advocacy: raise awareness about AI safety.
R&D: conduct technical projects in partnership with organizations responsible for implementing the EU AI Act, as well as other key players within the ecosystem.
Field-building: train researchers and engineers, and support policymakers.
What are this project's goals and how will you achieve them?
Our main mission is to shape public and policy discussions around AI safety by engaging with the ecosystem in several complementary ways:
Policy outreach and support: engaging with key policymakers, sharing insights, writing policy briefs, and building collaborations with relevant institutions.
Public awareness: writing op-eds and articles, organizing events, and giving interviews.
R&D: engaging in technical projects, such as developing an open-source platform for benchmarks in partnership with startups and public institutions.
Education and field-building: nurturing our AI safety talent pipeline through university courses, bootcamps, our textbook, online programs, and mentoring.
How will this funding be used?
Budget for 12 months of operations:
$212k: Salaries for 6 FTEs (5 full-time staff and 2 part-time):
Executive director (already 50% funded)
Head of research (already funded)
Head of policy (already funded)
Head of operations
Head of strategy
Scientific director (part-time)
Media and communications expert (part-time)
$74k: Programs
$50k: R&D grants and internships
$24k: Talks, round tables and workshops
$52k: General expenses
Offices
Subscriptions, equipment, transport, compute
Who is on your team and what's your track record on similar projects?
Charbel-Raphaël Ségerie: Executive director. Charbel has been coordinating most of EffiSciences’ AI activities, teaches an accredited AI safety programs in the top French research university, kickstarted and facilitates the ML4Good bootcamps, and creates content such as articles and an AI safety textbook. (LessWrong profile)
Alexandre Variengien: Head of research. Alexandre is an independent researcher who has previously interned at Redwood Research as research manager for the REMIX program, and has done his master’s thesis at Conjecture. He was second author on the Circuit for Indirect Object Identification paper (LessWrong profile)
Florent Berthet: Head of operations. Florent is currently EffiSciences’ executive director, and previously co-founded and ran EA France.
Manuel Bimich: Head of strategy. Manuel has been involved with EffiSciences' AI division since its early days.
Vincent Corruble: Scientific director. Vincent is Associate Professor at Sorbonne University and is a regular visiting researcher at CHAI.
Track record:
We have been doing AI safety field-building in France for two years with good success, reaching 1,000+ students and orienting more than 30 people towards AI safety careers. Our ML4Good bootcamps have now been replicated in several countries, and our textbook is already being used by several groups. You can find more detail on our LessWrong post from last year.
We have recently started building collaborations with multiple organisations to develop tools that might eventually be used to implement the AI Act. These organizations have shown strong interest in our work, and collaborating with them will help us gain us credibility among key private and public stakeholders.
While public advocacy was not a priority for us previously, it will be one of our core activities moving forward. We are rapidly acquiring experience in this area, and have already begun establishing partnerships with leading AI journalists in France. For example, we recently published an op-ed supported by Yoshua Bengio in a major French newspaper.
What are the most likely causes and outcomes if this project fails? (premortem)
Likely causes:
Insufficient engagement or resistance from key AI actors and policymakers due to ideological differences or bad economic incentives.
Inability to secure adequate funding and talent, which is essential to reach a critical mass that would, in turn, attract additional resources and skilled people. Being able to attract people with sufficient experience is especially important for our policy-focused work, but it is challenging to find candidates who are both deeply knowledgeable about the subject and well-connected within the policy ecosystem.
Potential outcomes:
Limited impact on shaping public and policy discourse on AI safety, potentially resulting in France adopting positions that undermine international coordination efforts.
Polarizing the public discourse. The fields of AI ethics and AI safety are somewhat divided, and we are seeing sparks of this happening in France. By inviting experts from different AI fields and with different beliefs to discuss (e.g. during round tables and panels, as we are currently doing), we aim to promote a healthier debate and foster positive relationships between AI actors in France.
What other funding are you or your project getting?
We have already raised $150k for this project. To see how we will use that budget, check the "already funded" mentions in the "How will this funding be used?" section.
Florent Berthet
5 months ago
@Haiku Hi Nathan, yes you can support CeSIA by giving to EffiSciences (CeSIA's current legal structure until we register it as a separate non-profit) using the following link : https://www.helloasso.com/associations/effisciences/formulaires/1/en
If your name appears in your donation, I will know to allocate all the funds to CeSIA. If not, feel free to reach out to me at florent@securite-ia.fr to confirm how much and when you donated.
And thanks a lot for your support, this is really helpful! ❤️
Nathan Metzger
5 months ago
The AI Action Summit will be held in France in February, which makes France a strategically valuable country for communication about AI risk. I believe efforts there are severely underfunded with respect to their potential impact.
Neel Nanda
6 months ago
Advocacy, R&D, and field-building seem like very different things for such a small and new org to be trying to do at once. Why did you make this decision, and how concerned are you about being spread too thin?
You also might want to add to his bio that Alexandre was second author on the Indirect Object Identification paper, which I think was great work.
Florent Berthet
6 months ago
@NeelNanda Great point, agreed. We may eventually need to narrow down our missions if we feel overextended. However, for now it seems worthwhile to explore multiple approaches. This will give us a better sense of which initiatives are most impactful and which can be dropped if necessary.
We decided to engage on these activities because 1) they all seem pretty important if we want to nudge France towards more safety, 2) they are complementary (more below), and 3) we have the support of 15+ dedicated volunteers, which gives us more bandwidth to pursue several missions. To give more detail for each activity:
Field-building: we are confident we can continue doing a good job here because we have been doing this for two years at EffiSciences. The pipeline is now working well and we know how much time it requires from our core team. These field-building activities help us recruit volunteers and seem very much worth keeping.
R&D: we have a few strong technical people who are focused on a specific project — the Benchmarks for the Evaluation of LLM Safeguards (BELLS) mentioned in this page —, which doesn't require many people to produce useful outputs. This project can be scaled by onboarding new people to work on additional benchmarks for various threat models, but if we are too constrained on funding or staff, we will just keep the focus on a few benchmarks. This R&D work is instrumental to policy outreach by helping us make connexions with public institutions that are interested in our tools and expertise.
Advocacy:
Policy-wise, we have already started discussing with a few public institutions to collaborate on our technical work, but we have yet to engage on advocacy per se. This will be a mission of the policy role we are recruiting for.
Regarding public awareness, we have a small team consisting of one staff member at approximately 0.5 FTE and 3-4 volunteers. They have listed various strategies to implement, some of which we have already begun, such as collaborating with journalists, publishing our newsletter, and posting on social media. In the coming weeks, we should gain a clearer understanding of the effectiveness of each approach. Here again, if we feel spread too thin, we could scale back some activities