The AI Action Summit in France could play an important role in building on the safety agreements of the Bletchley Park and Seoul Summits, or it could fail to make any meaningful contribution to safety. I'm not personally familiar with the team, but it seems to have solid experience in French AI governance.
Hiring AI Policy collaborators to prepare France AI Action Summit
I am publishing this funding request on behalf of Tom David, with his authorization. I will collaborate with him on this project but will not receive any funding myself.
Project summary
Six months' salary for two collaborators working on AI governance in France ahead of the AI Action Summit.
What are this project's goals? How will you achieve them?
This project aims to expand my capacity for action in France and internationally, to help create robust governance mechanisms, and to help establish the conditions for solid international coordination.
On February 10th and 11th, France will host the AI Action Summit - a global AI summit following Bletchley Park and the AI Seoul Summit - where governments could make substantial commitments. With this funding, I could form a small team, respond to more requests, write more articles, and stop missing high-impact opportunities for lack of bandwidth.
By working to ensure that how AI works, its trends, its present and emerging risks, and the associated ethical and governance questions are presented faithfully to decision-makers, we aim to expose them to these challenges, inform them with reliable data, and, where relevant, propose recommendations.
France plays an important role in the trajectory of global AI governance, yet efforts to create the conditions for informed decision-making there remain insufficient. This small team would focus its efforts in that direction.
How will this funding be used?
You can find the detailed expense breakdown in this spreadsheet: Budget Breakdown - Tom AI Policy team
Who is on your team? What's your track record on similar projects?
I have contributed to technical notes for decision-makers, articles, general-audience notes, the content of a course on AI that reached over 300k people, and high-level discussions with researchers and key decision-makers in the French AI ecosystem. Examples:
Author of: AI & Foresight: Envisioning the Future When the Unthinkable Becomes Possible https://www.institutmontaigne.org/en/expressions/ai-foresight-envisioning-future-when-unthinkable-becomes-possible
Contribution to a note: Investing in safe and trustworthy AI https://www.institutmontaigne.org/en/publications/investing-safe-and-trustworthy-ai-european-imperative-french-opportunity
Participation in a note proposing a French AI authority (which led to the creation of the French AI Safety Institute) https://www.institutmontaigne.org/publications/pour-une-autorite-francaise-de-lia
PRISM Eval
Co-founder & Director of Governance at PRISM Eval https://www.linkedin.com/company/prism-eval/
Other
I participated in the AI Global Forum and the AI Seoul Summit, where I discussed AI issues and the need for evaluation with high-level stakeholders. https://www.linkedin.com/posts/tom-david-a33444259_the-ai-seoul-summit-has-just-concluded-activity-7199008214573563904-KEIf
I gave a lecture on AI Security, Safety and Robustness at Sciences Po https://www.linkedin.com/posts/tom-david-a33444259_heureux-davoir-eu-lopportunit%C3%A9-daborder-activity-7173018269472251904--oib
I organized high-level discussions with Yoshua Bengio https://www.linkedin.com/posts/tom-david-a33444259_hier-%C3%A0-linstitut-montaigne-nous-avons-activity-7120475120606830593-bQIx and with Jack Clark https://www.linkedin.com/posts/tom-david-a33444259_hier-%C3%A0-linstitut-montaigne-nous-avons-activity-7168238710486982657-P40V
I participated in the Korea-France Business Dialogue for our Future, alongside the Korean Ministry of Trade, Industry and Energy, the French Ambassador to Korea, members of MEDEF International, and the Federation of Korean Industries, giving a short presentation on the critical need for AI evaluation https://www.linkedin.com/posts/tom-david-a33444259_today-i-had-the-honor-of-participating-in-activity-7199421491782791168-0d4X
I'm an advisor for a top French entrepreneur on AI.
I’m a member of the CEN and CENELEC Standardization Committee on Artificial Intelligence (JTC21)
MATS Research Scholar (6th cohort) - Governance and Evaluation
I contributed to launching an organisation that runs ML4Good, hackathons, and other events. https://www.ml4good.org/
What are the most likely causes and outcomes if this project fails?
[Coming soon]
What other funding are you or your project getting?
Currently, this project is not receiving any other funding.
I have applied for funding from another funder, who is currently evaluating the project.
If we secure funding from sources other than Manifund, we'll use this money either to hire more team members or to keep our current team working for an extra six months, giving everyone more job security and better working conditions.
Austin Chen
3 months ago
I don't know Tom, but a couple weeks ago I spoke with Lucie on the importance and neglectedness of AI governance in France. I hadn't realized that the next version of the AI Safety Summit was going to happen in France; this seems like a great reason to invest in the folks doing on-the-ground work there. In that conversation, Lucie made a strong endorsement of Tom's prior work; here I'm delegating my trust to Lucie.