London Initiative for Safe AI (LISA) Funding
Project summary
London stands out as an ideal location for a new AI safety research centre:
Frontier Labs: It is the only city outside of the Bay Area with offices from all major AI labs (e.g., Google DeepMind, OpenAI, Anthropic, Meta)
Concentrated and underutilised talent: many researchers and software engineers in London are keen to contribute to AI safety but are reluctant or unable to relocate to the Bay Area due to visas, partners, family, culture, etc.
UK Government connections: The UK government has clearly signalled that it takes AI safety seriously by hosting the first AI Safety Summit, establishing an AI Safety Institute, and introducing favourable immigration requirements for researchers. Moreover, policy-makers and researchers are all within 30 minutes of one another.
Easy transport links: LISA is ideally located to act as a short-term base for AI safety researchers visiting from the US, Europe, and other parts of the UK who want to meet researchers and policy-makers at companies, universities, and government bodies in and around London, as well as elsewhere in Europe.
Regular MATS cohorts: because of the above (in particular, the favourable immigration requirements compared with the US), London hosts regular cohorts of the MATS program, both scholars and mentors.
Despite this favourable setting, little has so far been invested in community infrastructure. Our mission, therefore, is to build a home for leading AI safety research in London by incubating individual AI safety researchers and small organisations. To achieve this, LISA will:
Provide a research environment that is supportive, productive, and collaborative, where a diversity of ideas can be refined, challenged, and advanced;
Offer financial stability, collective recognition, and accountability to individual researchers and new small organisations;
Cultivate a London home for professional AI safety research by leveraging London's strategic advantages and building upon our existing ecosystem and partnerships;
Foster epistemic quality and diversity amongst new AI safety researchers and organisations by facilitating mentorship & upskilling programmes and encouraging exploration of numerous AI safety research agendas.
LISA stands in a unique position to enact this vision. In 2023, we founded an office space ecosystem, which now contains organisations such as Apollo Research, Leap Labs, MATS extension, ARENA, and BlueDot Impact, as well as many individual and externally affiliated researchers. We are poised to capitalise on the abundance of motivated and competent talent in London and the supportive environment provided by the UK government and other local organisations. Our approach is not just about creating a space for research; it is about building a community and a movement that can significantly improve the safety of advanced AI systems.
We have been open since September 2023. In that time:
In addition to our member organisations, we are home to remote-working AI safety researchers affiliated with the University of Oxford, Imperial College London, University of Cambridge, University of Edinburgh, MILA, Metaculus, Conjecture, Anthropic, PIBBSS, and Lakera (amongst others), as well as several funded independent researchers.
Recent AI safety papers featuring authors primarily working from the LISA offices include Copy Suppression: Comprehensively Understanding an Attention Head, How to Catch an AI Liar: Detection in Black-box LLMs by Asking Unrelated Questions, Sparse Autoencoders Find Highly Interpretable Features in Language Models, The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A", and Taken out of context: On measuring situational awareness in LLMs.
LISA Residents have helped develop new AI safety agendas as part of the MATS extension program, including sparse autoencoders for mechanistic interpretability, conditioning predictive models, developmental interpretability, defining situational awareness, formalising natural abstractions, and causal foundations.
LISA Residents have been hired to do AI safety research by leading external organisations, including Anthropic, Google DeepMind, the UK AI Safety Institute (formerly the UK Frontier AI Taskforce), and various leading universities.
What are this project's goals and how will you achieve them?
Our Vision
Our mission is to be a professional research centre that improves the safety of advanced AI systems by supporting and empowering individual researchers and small organisations. We do this by creating a supportive, collaborative, and dynamic research environment that hosts members pioneering a diversity of AI safety research.
In the next two years, LISA’s vision is to:
Be a premier AI safety research centre that has housed significant contributions to AI safety research, thanks to its collaborative ecosystem of small organisations and individual researchers.
Have supported the maturation of member organisations by increasing their research productivity, impact, and recognition.
Have positively influenced the career trajectories of LISA alumni, who will have transitioned into key positions in AI safety across industry, academia, and government as these opportunities emerge and develop over time. Some of these alumni would otherwise have pursued non-AI safety careers. Alumni will maintain links with LISA and its ecosystem, e.g., through research collaborations, mentoring, speaking, and career events.
Have advanced a diversity of AI safety research agendas and uncovered novel agendas that significantly improve our understanding of how and why advanced AI systems work, or our ability to control and align them.
Have nurtured new AI safety talent and organisations by serving as a training ground for new, motivated talent entering the field, positioning LISA as a pivotal entry point for future leaders in AI safety research and impactful new AI safety organisations.
Our Plans
We will focus on activities to yield four outputs:
1) Provide a supportive, productive, and collaborative research environment: an office space that is a “melting pot” of epistemically diverse AI safety researchers working on collaborative research projects. LISA will offer comprehensive operational and research support, as well as amenities such as workstations, meeting rooms, phone booths, and catering (including snacks and drinks).
2) Offer financial stability, collective recognition, and accountability to individual researchers and new small organisations by subsidising office and operations overhead, providing fiscal sponsorship for new AI safety organisations, offering legal and immigration support, and granting annual LISA Research Fellowships to support and develop individuals who have already shown evidence of high-impact AI safety research (through the MATS Program, the Astra Fellowship, a PhD or postdoc, or otherwise).
3) Cultivate a London home for professional AI safety research by admitting new member organisations and LISA Residents through a rigorous selection process (relying on the advisory board) that weighs alignment with LISA’s mission, existing research competence, and cultural fit. We will host prominent AI safety researchers as speakers, hold workshops and other professional AI safety events, and strengthen our partnerships with similar centres in the US (e.g., Constellation, FAR AI, and CHAI), the UK (e.g., Trajan House), and likely new centres elsewhere, as well as with the UK Government’s AI Safety Institute and AI safety researchers in industry.
4) Foster epistemic quality and diversity amongst new AI safety researchers & organisations by seasonally hosting established and proven domain-specific mentorship and upskilling programmes such as MATS and ARENA.
These outputs advance our theory of change, depicted as a causal graph here.
For more information about LISA’s strategy, please visit LISA’s Strategy Overview.
How will this funding be used?
To fully execute our mission, LISA is expected to cost $3.7M over the next 12 months.
Although 12 months of funding is our minimum ask, long-term stability is incredibly important for LISA’s members, so we present everything as expected costs per year.
All amounts are given in USD. However, LISA is located in London, so our expenses are paid in GBP. We have used a conversion of 1 GBP : 1.27 USD throughout (correct at the time of writing), but exact USD costs are subject to exchange rate fluctuations.
With a smaller amount of funding, we can still execute certain parts of our theory of change, so we present a three-tiered funding proposal.
Tier 1: “Basic” = LISA staff wages + minimal office functions for an office with capacity for 120 researchers. Expected cost = $1.22M/year (so ~$10k per AI safety researcher for 12 months)
Tier 2: “Core” = Tier 1 + subsidised office space for individual researchers and new organisations. Expected cost = $2.28M/year
Tier 3: “Core + Fellowships” = Tier 2 + LISA Research Fellowship funding. Expected cost = $3.74M/year (This assumes 10 Research Fellows at a cost of ~$100k per fellow per year)
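As a rough illustration of how these headline figures fit together, the short sketch below converts the tier totals back to GBP at the quoted 1 GBP : 1.27 USD rate and reproduces the approximate per-researcher figure for Tier 1. This is a back-of-the-envelope check only, not an official budget breakdown.

```python
# Back-of-the-envelope check of the tier figures quoted above.
# Assumes the 1 GBP : 1.27 USD rate and the USD tier totals from this proposal;
# actual GBP costs will move with the exchange rate.

GBP_TO_USD = 1.27
RESEARCHER_CAPACITY = 120  # office capacity quoted for Tier 1

tier_totals_usd = {
    "Tier 1 (Basic)": 1.22e6,
    "Tier 2 (Core)": 2.28e6,
    "Tier 3 (Core + Fellowships)": 3.74e6,
}

for tier, usd in tier_totals_usd.items():
    gbp = usd / GBP_TO_USD
    print(f"{tier}: ${usd / 1e6:.2f}M/year ≈ £{gbp / 1e6:.2f}M/year")

# Tier 1 cost per researcher seat, matching the "~$10k per researcher" figure.
per_researcher = tier_totals_usd["Tier 1 (Basic)"] / RESEARCHER_CAPACITY
print(f"Tier 1 per researcher: ~${per_researcher:,.0f}/year")
```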
For more budgetary details, please reach out to the LISA team.
Who is on your team and what's your track record on similar projects?
Researchers like it at LISA:
Testimonials - Member Organisations
Testimonials - Individual Researchers
Testimonials - Visiting Researchers
LISA Team
LISA's Leadership team and Advisory Board bring experience in managing communities, selecting talented researchers, and conducting impactful technical AI safety research.
Mike Brozowski - Operations Director - mike@safeai.org.uk
Mike is an operations professional with over seven years’ experience in senior leadership roles, including at high-growth, early-stage companies. Mike managed the business operations of the MATS program in London and co-founded LISA. Before his involvement with AI safety, Mike led operations for a number of FinTech firms and was responsible for the management and integration of internal operations.
James Fox - Research Director - james@safeai.org.uk
James is Research Director. He co-leads LISA and oversees research prioritisation and strategy. He is currently writing up his PhD in Computer Science at the University of Oxford on technical AI safety, focusing on game theory, causality, reinforcement learning, and agent foundations, supervised by Tom Everitt (Google DeepMind) and Michael Wooldridge & Alessandro Abate (Oxford).
Ryan Kidd - Non-Executive Director (unpaid) - ryan@safeai.org.uk
Ryan is Co-Director of MATS, a Board Member and Co-Founder of the London Initiative for Safe AI (LISA), and a Manifund Regrantor. Previously, he completed a BSc (Hons) and PhD in Physics at the University of Queensland (UQ), where he ran UQ’s Effective Altruism student group for three years, tutored physics courses, volunteered in academic, mental health, and ally roles, and helped organise the UQ Junior Physics Odyssey.
Christian Smith - Non-Executive Director (unpaid) - christian@safeai.org.uk
Christian is Co-Director of MATS and a Board Member and Co-Founder of the London Initiative for Safe AI (LISA). Previously, he studied particle physics and pedagogy at Stanford University, worked in operations at multiple organisations including Lightcone Infrastructure, performed research at CERN, and organised educational programmes like the Uncommon Sense Seminar.
Joe Murray - Operations Lead - joe@safeai.org.uk
Joe is Operations Lead. He manages the day-to-day operations of LISA’s research centre. Prior to this, he was an operations generalist for the SERI MATS London program, worked in product management, and received an MA in Philosophy from King’s College London.
Nina Wolff-Ingham - Office Manager - nina@safeai.org.uk
Nina is LISA’s Office Manager. She works to optimise the research centre’s functionality. Nina has a background in hospitality management and event coordination, which she uses alongside her BA (Hons) in Marketing to help develop a cohesive working environment.
The Research Director, Operations Director, and Advisory Board (made up of member organisation leads) are in charge of new membership applications to LISA.
What are the most likely causes and outcomes if this project fails? (premortem)
The supported individual researchers & organisations might underperform expectations
Mitigations:
Support a diversity of individuals, organisations, and agendas: It is hard to know which AI safety agendas will be most promising, so we will support a wide range of research agendas to decorrelate failure. We will also guard against groupthink and foster a culture of humility and curiosity. If we are diversifying enough, we should expect some fraction of the ideas pursued here to fail.
Expert evaluation: LISA utilises the diverse technical AI safety expertise of the LISA team and advisory board (consisting of member organisation representatives) to evaluate membership admissions. We draw on experience selecting mentors and mentees for MATS and solicit advice and references from MATS mentors, PhD supervisors, or industry employers (when applicable). Finally, we will predominantly admit as Residents those with a track record of quality AI safety research (or a convincing reference) and evidence of an epistemic attitude of humility and curiosity.
Regular impact analysis: We will routinely evaluate LISA Residents’ impact to ensure that their work is still best supported at LISA. LISA Research Fellows will attend biweekly meetings with LISA’s Research Director for constructive critique and mentorship and will be expected to report their progress every three months.
Adaptive approach: LISA will regularly and rigorously gather insights into which areas of AI safety research are viewed as more or less impactful, and will update and adapt our research support priorities accordingly.
We might not be able to attract and retain top potential AI safety researchers
Mitigations:
Ensure that member organisations and residents value LISA (see testimonials above for current evidence).
We have established an advisory board consisting of resident member organisation leads, and we meet with them frequently to ensure they feel heard, their needs are met, and LISA remains the most attractive home for them.
LISA will continue evolving to accommodate its resident individuals’ and member organisations' growing needs and ambitions. Indeed, this is part of our motivation for moving to a larger office space.
Attractive proposition: Our activities, outlined in our theory of change, will make LISA an appealing option for top researchers who might also have lucrative non-AI safety job offers. The current calibre of researchers and organisations based at LISA indicates latent demand (even though we have yet to advertise ourselves beyond word of mouth).
Facilitating industry roles: “If AI Safety research impact is power law distributed, won’t the best researchers all find jobs in industry instead of LISA anyway?”
We view securing industry AI safety jobs for LISA Residents as aligned with our mission. Since the best talent will also have strong non-AI safety job offers, it is important to provide a productive and attractive home.
Many agendas are not currently supported in industry; we think it is important to support those advancing otherwise neglected research agendas.
We might be redundant given the UK Government’s AI Safety Institute and other initiatives
This is a misconception. The AI Safety Institute (AISI) has a very different focus, concentrating on evaluating the impact and risks of frontier models. LISA, by contrast, will be a place where fundamental research can happen. We also house organisations like Apollo Research, which is a partner of AISI, so the relationship is complementary and collaborative. As for other initiatives opening, we think that the existence of more safety institutes is good for AI safety, and that it is good for individuals to have a choice between a range of options.
What other funding are you or your project getting?
We welcome funding from multiple sources. Our first 6 months were funded by Open Philanthropy and Lightspeed Grants. We are seeking further funding from Open Philanthropy and the Survival and Flourishing Fund, amongst others.