Project Summary
I am an independent researcher with 25+ years of experience in qualitative stakeholder research, an appointed expert at the Global AI Ethics Institute (Paris), and the author of three peer-reviewed papers on AI behavioral assessment and cross-cultural accountability:
→ Beyond Technical Compliance series (3 papers):
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6098766
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6307162
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6513678
Google Scholar:
https://scholar.google.com/citations?hl=ja&user=uCq_9a0AAAAJ
This project validates the CAAF (Cooperative Alignment Assessment Framework): a socio-technical audit tool designed to identify AI behavioral manipulation across diverse cultural contexts. Existing AI safety benchmarks are developed predominantly within Western linguistic and institutional frameworks, leaving non-Western populations systematically underserved by current accountability mechanisms. CAAF directly addresses this gap.
The critical next milestone is expert consultation at the International Conference on Large-Scale AI Risks 2026 (June 23–24, hosted by KU Leuven and the Future of Life Institute)—an international gathering of AI policy researchers, ethicists, and governance specialists. This conference presents a time-sensitive opportunity to stress-test the CAAF Audit Checklist with global stakeholders and secure the cross-cultural validation necessary for adoption as an open auditing standard.
What are this project's goals? How will you achieve them?
Goal 1 — Framework Validation
The CAAF Audit Checklist will be reviewed and stress-tested by international experts attending the International Conference on Large-Scale AI Risks 2026 in Leuven. The target outcome is documented expert endorsement and a concrete set of revisions that strengthen the tool's cross-cultural applicability.
Goal 2 — Open Standard Publication
A validated version of the CAAF framework will be published on SSRN and a dedicated open-access platform, formatted for use by NGOs, policymakers, and independent auditors. This will be the first publicly available, cross-culturally validated checklist for auditing AI behavioral manipulation.
Goal 3 — Team and Collaborator Recruitment
The Leuven consultation will serve as a recruiting ground for a multidisciplinary team of auditors and ethicists. Building this network is essential to moving CAAF from a single-researcher framework to a sustained, internationally governed auditing standard.
How will this funding be used?
This project has a minimum funding threshold of $2,500 and a goal of $5,000.
At $2,500 (minimum):
Round-trip travel (Japan–Leuven): $2,200
Conference registration fee: $300
Total: $2,500
Accommodation has been arranged separately through a supporting organization, at no cost to this grant.
At minimum funding, the Leuven consultation proceeds. Expert validation of the CAAF Audit Checklist is completed and documented. This alone delivers the core milestone: an internationally reviewed framework ready for open publication on SSRN.
At $5,000 (full goal):
Round-trip travel (Japan–Leuven): $2,200
Conference registration fee: $300
CAAF audit platform development (multilingual): $1,500
SSRN open-access formatting & dissemination: $1,000
Total: $5,000
Full funding enables the complete roadmap: validated framework + open-access platform + multilingual localization, making CAAF immediately usable by NGOs and policymakers across non-Western contexts.
The conference runs June 23–24, 2026. This is a hard deadline: missing it means waiting for the next comparable international validation opportunity hosted by institutions of the standing of KU Leuven and the Future of Life Institute.
Who is on your team? What's your track record on similar projects?
I am currently the sole researcher on this project. My relevant credentials and track record:
25+ years of qualitative stakeholder research, with a focus on technology's impact on human agency
Appointed Expert, Global AI Ethics Institute, Paris
Three published papers on AI behavioral assessment and cross-cultural accountability (the Beyond Technical Compliance series):
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6098766
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6307162
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6513678
Presented the CAAF framework at the ADBI-JICA AI Forum to an international audience of development economists and AI policy specialists
The Leuven consultation is explicitly designed to grow this into a multidisciplinary team. Target collaborators include AI ethicists, legal scholars in data protection, and qualitative researchers from non-Western contexts including Asia, Africa, and Latin America.
What are the most likely causes and outcomes if this project fails?
The primary risk is timing: if the Leuven consultation does not proceed, the opportunity to validate CAAF with this specific cohort of international experts convened by KU Leuven and the Future of Life Institute is lost, and the publication timeline is significantly delayed.
A secondary risk is remaining a single-researcher project. Without the collaborator network that Leuven enables, CAAF's long-term governance and adoption will be slower and more fragile.
The broader consequence is that AI accountability tools available to non-Western NGOs and policymakers will continue to default to frameworks designed without their contexts in mind—a gap that current major benchmarks do not address.
How much money have you raised in the last 12 months, and from where?
None. All research and international outreach to date have been self-funded.