I am Sarah (Yuanyuan) Sun, an ex-ML engineer and AI safety field builder in China. This grant would provide travel support for me to attend ControlConf in San Francisco from April 17th-19th, 2026. I have requested financial support from the conference but have not yet received a response, and I need to book travel very soon if I am going to attend.
Why I am interested in AI Control and ControlConf:
I ran the Apart Research AI Control hackathon node (physical site) in Shanghai, which attracted eleven teams. Attendees included a Chinese big tech AI safety lead, an adversarial ML postdoc, and a Foresight fellow/FLI fellow.
I am working on an academic paper on AI Control in collaboration with four people who attended the hackathon. I am interested in transitioning into AI Control research, alongside my ongoing field building activities in AI safety.
China is seeing a strong need for AI Control methods due to the enormous adoption of OpenClaw, and I am well-positioned to launch research efforts in the field with hackathon attendees with whom I remain in touch.
I may be the only attendee (or one of a very few) coming from mainland China (I asked other AI safety orgs in China, and they won’t be attending). I am therefore well placed to strengthen the network between China and other AI Control researchers.
At the conference, I will:
Establish connections for myself and other people in the Chinese AI safety community who cannot easily attend a conference outside China.
Find collaborators to provide feedback on my AI Control paper (in ideation stage).
Learn more about the technical challenges and approaches in AI control from the workshop on April 17th.
This funding will be used to pay for travel expenses only, including:
$1075 for a flight from Shanghai to San Francisco
$400 for four nights of accommodation
I am Sarah (Yuanyuan) Sun. I am the only recipient of these funds.
A bit more about me:
I have worked in AI safety for two years, including writing a first-author paper on AI governance that was read by the EU AI Office.
I co-organize the group “Open Community for AI Safety China” which has ~1000 views on biweekly frontier AI safety videos with guests from Shanghai AI lab, ex-Anthropic, University of Oxford, and soon, Redwood Research. The group has 5000+ members on Zhihu (Chinese version of Quora), and collaborates with a 20,000-follower tech influencer account.
I also maintain a substantial network in AI safety by attending conferences (such as EA Global Bay and IASEAI 2026) and meeting people interested in collaborating with Chinese researchers.
I contribute to the safety community by giving talks in the UK/US (including a recent one for the ERA AI fellowship) on the Chinese AI safety ecosystem.
I have a visa and can attend the conference with sufficient funding.
Beyond this grant, I am using only personal funds to attend this conference.