Southeast Asia currently has no dedicated hub for AI policies, frameworks, and laws, or for related events, even though such a resource would be useful to anyone looking to learn about and get involved in the region. SEA Observatory seeks to provide a holistic view of the state of AI policy and governance in SEA, giving policymakers a tool for writing better and safer AI policies. It will also help NGOs/NPOs and AI safety researchers in the region understand the space by mapping out the AI policy landscape in SEA, enabling them to decide where to focus their efforts to maximize impact. The observatory also intends to be a tool for AI policy labs in Asia and to serve as a baseline for cross-border support for AI risk management in Southeast Asia.
Most resources and AI policy leaders are highly concentrated in the Global North, with the EU and US leading the charge in developing AI policies and regulations, starting with the EU AI Act and the US NIST AI Risk Management Framework. Some institutes track policies within specific regions, such as the EU's AI Standards Hub, while organizations such as the OECD and CAIDP attempt to track them globally. Countries in other regions have also begun to recognize the value of a centralized observatory of AI policies; for example, Argentina has proposed a Federal Observatory on AI (OFIA). However, none exists for the Southeast Asia region. Most countries in the region have limited research and knowledge about the impacts of AI, and few policies addressing the challenges AI may pose. SEA Observatory seeks to fill this gap.
The SEA Observatory will be structured around 5 buckets of information: (1) Laws and Policies, (2) Reports and Indexes, (3) Important Events, (4) Funding and Job Opportunities, and (5) Experts Directory, each split across the 11 countries of Southeast Asia. This structure will serve as a spine for cross-border AI risk management and governance monitoring tools in Southeast Asia. The content will be delivered through a website offering both a Map view of Southeast Asia and a Database view.
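The bucket-by-country structure above can be sketched as a simple data model. This is a hypothetical illustration only; the record class and field names are assumptions, not the observatory's actual schema:

```python
from dataclasses import dataclass

# The five information buckets and 11 SEA countries named in the proposal.
BUCKETS = [
    "Laws and Policies",
    "Reports and Indexes",
    "Important Events",
    "Funding and Job Opportunities",
    "Experts Directory",
]
COUNTRIES = [
    "Brunei", "Cambodia", "Indonesia", "Laos", "Malaysia", "Myanmar",
    "Philippines", "Singapore", "Thailand", "Timor-Leste", "Vietnam",
]

@dataclass
class Entry:
    """One record in the observatory database (illustrative fields only)."""
    country: str      # one of COUNTRIES
    bucket: str       # one of BUCKETS
    title: str
    source_url: str
    summary_en: str   # English summary from the country resource person
```

Every entry then maps onto one cell of the 11-by-5 country-by-bucket grid that would drive both the Map view and the Database view.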
Short-term goals:
Reduce barriers to writing and introducing SEA-specific AI policies within each country
Encourage cross-country region-specific learning and development of policies
Serve as a public database for AI Safety initiatives and Policy labs in Southeast Asia
Long-term goals:
Encourage the development of regional standards for AI or interoperable AI policies through better collaboration among countries
Enable citizens and NGOs/NPOs to understand their country’s positioning on AI, and to lobby or push for regulations/policies promoting AI safety
Act as a non-sectoral baseline tool for transnational AI risk management and governance in Asia.
These goals will be achieved by developing a centralized hub (SEA Observatory) to track AI policies, frameworks, and laws within Southeast Asia.
This will allow civil servants to:
Read policies from other departments, ensuring a cohesive policy position within the country
Read policies from other countries, helping with adoption of good policies and building intuition and understanding of what works/does not work
Identify blindspots or areas where existing policies are lacking
Additionally, NGOs/NPOs and AI safety policy researchers will benefit, since SEA Observatory lets them easily read what their governments are doing instead of trawling through press releases and government documents. They can:
Map out the AI policy landscape and figure out what protections are lacking
Figure out how different countries are approaching AI adoption
Easier research process and comparisons of each country’s stance on AI
Wage Costs
1 translator and resource person per country: 11*0.1 FTE = 1.1 FTE total
The translator and resource person will translate the information into English and summarize the key takeaways. As the field develops, we hope to hire AI safety experts from the respective countries for this task, and they can provide additional insights and thoughts on the latest updates.
1 data collection & validation associate working 0.1 FTE for each of the 5 buckets = 0.5 FTE total
Associates will work on collecting data for the bucket, as well as vetting entries submitted by others, to keep the database up to date with verifiable information.
This is essential given potential failure modes such as information hazards and the dual-use nature of the gathered information and research. We will keep a human in the loop and ensure rigorous auditing, feeding into AISA’s safety culture.
An additional task could include monthly round-ups of the updates within their bucket.
Web developer = 0.4 FTE
The web developer will be in charge of designing and developing the website, building the dynamic Map view and the Database view, as well as integrating it with any additional data sources we come across. They will be in charge of handling the servers and database as well.
Subtotal = 2 FTE = $100k/year
Hosting Costs
Domain cost: $10-50/year
AWS hosting cost: $50/month = $600/year
Depends on traffic, but for a dynamic website we expect the upper bound on cost to be $50/month
DB hosting (Airtable): maximum $54/month ≈ $650/year
Subtotal = $1300/year
Overhead costs
$100,000 Wage costs (2 FTE, 12 months)
$1,300 Hosting costs
$10,130 Buffer of 10%
$9,117 Ashgro’s 9% sponsorship fee
$120,547 Total
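The line items above can be checked with a few lines of arithmetic (figures taken directly from this budget; the Airtable cost is rounded as in the hosting table, and both the buffer and the sponsorship fee are computed on the pre-buffer subtotal):

```python
# Staffing adds up to the 2 FTE quoted above.
fte = 11 * 0.1 + 5 * 0.1 + 0.4           # translators + associates + web dev
assert round(fte, 1) == 2.0

wages = 100_000                          # 2 FTE for 12 months
hosting = 50 + 600 + 650                 # domain + AWS + Airtable (rounded)
subtotal = wages + hosting               # 101,300
buffer = round(subtotal * 0.10)          # 10,130
fee = round(subtotal * 0.09)             # 9,117 (Ashgro's 9% sponsorship fee)
total = subtotal + buffer + fee
print(total)                             # 120547
```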
This will support the project over the next year (12 months). The minimum goal of $20k will support the project for the next 2 months, until the end of 2024; further funding will be sought at that point.
This is a new project under a registered charity (Ashgro). Shi Hao Lee, AI Safety Asia’s (AISA) technical lead, is driving this project. Shi Hao is currently pursuing an MEng in Computer Science at Cornell Tech, with a focus on AI policy. He is working on this as a thesis project, under the mentorship of Professor Helen Nissenbaum. Concurrently, this project falls under AISA’s field-building efforts in the region, and will complement other AISA programs such as an AI safety governance program for civil servants and a roundtable series. This project is also co-led by Lyantoniette Chua and Edward Tsoi - Co-founders of AISA.
Shi Hao is a Singapore Government scholar and will return to work in AI policy after graduation. Previously, he worked on NLP research with the Lawrence Berkeley National Laboratory’s NucScholar team, building a search engine for nuclear science research papers. He was also part of a machine learning student organization at UC Berkeley dedicated to tech consulting and creative ML projects, where he spearheaded solutions for real-time data analysis and insights.
Lack of data availability and difficulty of data collection
Countermeasures:
Start with publicly available data collated by global AI policy trackers, then crowd-source data collection by releasing a public form for users to submit policies they hear about. Crowd-sourced data will be validated before being added to the public-facing site.
Hire a data collection & validation associate for each bucket, who is dedicated to collecting the data as well as validating the crowd-sourced data.
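The two countermeasures combine into a simple moderation pipeline: crowd-sourced entries sit in a pending queue until an associate validates them. The sketch below is illustrative; the statuses and function names are assumptions, not an actual implementation:

```python
# Crowd-sourced submissions enter a pending queue and only reach the
# public-facing site after a data collection & validation associate
# checks them against their cited source.
def submit(queue, entry):
    """A user submits an entry through the public form."""
    entry["status"] = "pending"
    queue.append(entry)

def review(entry, verifiable):
    """An associate publishes the entry only if its source checks out."""
    entry["status"] = "published" if verifiable else "rejected"
    return entry

def public_view(queue):
    """Only validated entries appear on the public-facing site."""
    return [e for e in queue if e["status"] == "published"]
```

Keeping unverified entries out of `public_view` is what lets the database stay crowd-sourced without sacrificing verifiability.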
Language barriers: policies may be written in different languages across the region, making it difficult for users to understand what is happening in other countries. The risk is minimal for users within their own country.
Countermeasures:
To begin, SEA Observatory may include a simple Google Translate option for policies (with a clear disclaimer that the text is machine-translated).
With funding, we will hire a translator and resource person for each country, who will translate the information into English and summarize the key takeaways. As the field develops, this could also involve AI safety experts from the respective countries.
We have raised $300 from a BlueDot Rapid Grant to cover hosting costs over the next 3 months. Beyond that, we are working on bundling this project with other AI Safety Asia projects to apply for Open Philanthropy funding.