AI poses escalating risks to human health — from algorithmic bias and erroneous clinical decisions already harming patients, through AI-enabled bioweapons development, to catastrophic and existential scenarios that threaten human survival itself. These risks collectively constitute a public health problem, yet healthcare professionals have almost no systematic exposure to them. This project addresses that gap through three deliverables:
A cross-sectional survey of healthcare professionals' understanding of AI risks to health.
A short AI risk literacy module for clinicians.
A free newsletter keeping health professionals informed of evolving AI health risk developments.
The project's goal is to build AI risk literacy among healthcare professionals — a trusted, politically influential workforce, present in every country, that shapes health policy worldwide.
Theory of change. Healthcare professionals are among the most trusted voices in society and have a track record of advocating against existential risks to humanity — the International Physicians for the Prevention of Nuclear War (IPPNW) was awarded the Nobel Peace Prize in 1985 for precisely this kind of work. A medical profession literate in AI risk could play an analogous role today, lending credibility and political weight to calls for AI governance. But clinicians cannot advocate against risks they are not aware of. This project begins with the necessary first step: building that awareness.
How this will be achieved — three integrated deliverables over 18 months:
Cross-sectional survey of healthcare professionals, establishing the baseline of their understanding of AI catastrophic and existential risk. Aim to publish as an open-access peer-reviewed paper, creating a citable evidence base for the field.
Short AI risk literacy module (2–4 hours, self-directed online) covering the spectrum of AI risks to health, from near-term clinical harms to catastrophic and existential scenarios. Piloted with 2–3 cohorts and evaluated with pre/post assessments. Freely available.
Free newsletter via Substack, keeping subscribers informed of evolving AI health risk developments. This is the dissemination and community-building arm — open to any healthcare professional, whether or not they have engaged with the survey or module, building an ongoing network of AI-risk-aware clinicians.
The survey establishes how large the awareness gap is. The module begins to close it. The newsletter sustains engagement as risks evolve.
Total budget: $50,000 over 18 months:
Staff time: $31,500
Survey costs (platform, distribution, participant incentives): $5,000
Module development (platform, content design, pilot, delivery): $4,000
Open-access publication fee (1 paper): $4,500
Contingency: $5,000
The newsletter runs at zero direct cost on Substack's free tier.
The project is led by Dr Richard Armitage, a practising GP (family doctor) and clinical academic.
Affiliations: Honorary Assistant Clinical Professor, University of Nottingham; MPhil candidate in Global Risk & Resilience, Centre for the Study of Existential Risk (CSER), University of Cambridge; incoming DPhil student, Nuffield Department of Medicine, University of Oxford (October 2026).
Relevant track record:
R Armitage. Artificial General Intelligence and Its Threat to Public Health. Journal of Evaluation in Clinical Practice. 2025;31(6):e70269. DOI: 10.1111/jep.70269
R Armitage. Frontier large language models and clinical recognition of Category A bioterrorism agents: a cross-sectional analysis. Global Security: Health, Science and Policy. 14 March 2026; 11(1). DOI: 10.1080/23779497.2026.2643956
In press:
Recognising bioterrorism-related acute illness in clinical practice: an urgent research priority. Health Security.
Why public health needs to engage with Existential Risk Studies: a call for collaboration. BMJ Global Health.
Clinical preparedness activities for bioterrorism: an ethical analysis. JME Practical Bioethics.
Volunteer Global Health Founder (UK registered charity No. 1185528)
GiveHealth Co-Founder (directing donations to GiveWell-recommended charities)
Bridge Health Research Co-Founder
The module reaches few clinicians, since healthcare professionals are time-poor. Mitigation: offer small incentives, e.g. Continuing Professional Development (CPD) recognition.
The newsletter fails to sustain engagement. Mitigation: commit to a realistic cadence of monthly issues.
Poor uptake of the survey. Mitigation: distribute through professional networks (RCGP, BMA, public health bodies); target 500–1,000 responses, treating 200+ as the minimum viable sample for publication.
Even in the event of partial failure, the survey paper alone would produce a citable evidence base that others could build on.
No funding has been raised for this project to date. This Manifund project is being created as part of an application to the Survival and Flourishing Fund 2026 Main Round (deadline 22 April 2026), with Manifund proposed as fiscal sponsor.
Aashka Patel
36 minutes ago
This is a high-impact project. And the theory of change really resonates very well. Hope this project gets funded, goodluck, @Dr. Richard Armitage :)