
Empowering AI Governance - Grad School Costs Support for Technical AIS Research

Not funded · Grant
$0 raised

Summary

During my MS CS degree at Stanford University in the 2023-24 academic year, I plan to spend most of my academic time advancing technical AI safety research focused on empowering AI governance interventions. You can view here a list of concrete research project ideas that I have brainstormed and refined through feedback from other researchers.

With short timelines, limited progress on AI alignment so far, and newfound regulatory excitement around AI, technical research that accurately informs decision-makers about advanced AI risks and makes technically blocked policy implementations more feasible seems both ripe with low-hanging fruit and quite important, to me and to others in the AI safety space like Anka Reuel, who will be mentoring and collaborating with me. Additionally, I’ve been discussing with Alan Chan the possibility of publishing a piece on open problems in this domain to recruit other researchers, and with Lewis Hammond the possibility of submitting findings from research like this to the UK AI Safety Summit.

Goals

  • The specific list of project ideas linked above includes subgoals and a basic plan for each project.

  • These projects all aim to be impactful by addressing technical bottlenecks in various aspects of AI governance so that decision-makers in labs, third-party organizations, and government can more easily implement governance proposals. In short, they aim to enable better AI governance.

  • Additionally, many of them might reveal useful information about AI systems and help us predict the future of AI development, deployment, and use. As such, this research could produce artifacts worth sharing with government stakeholders in groups like the UK AI Safety Summit (by word of mouth, this seems to be the kind of research the UK AI Foundation Model Taskforce is interested in) or the NIST Generative AI Public Working Group (of which I am a member).

  • Lastly, I expect these to be useful upskilling and credential-building projects for me, and potentially for newer students I might mentor as collaborators on this research, for example through the Supervised Program for Alignment Research, to which I’ve applied as a research advisor.

Use of Funding

My desired budget range is $20,000-$84,291 to support my costs of attending Stanford University’s School of Engineering as a graduate student this year. I was fortunate to have all of my undergraduate tuition and much of my living expenses covered by Stanford’s need-based financial aid due to my family’s financial situation, but financial aid is not available to graduate students (besides loans), so paying for my MS degree, the research I plan to do during it, and the resulting credentials for my career in AI safety and governance is a bit tricky.

See the university’s estimated budget here.

For my mainline scenario, I am asking for $40,680, the price of graduate tuition alone at the School of Engineering for this year (i.e., 100% tuition support for me for 9 months).

The minimum I’m asking for is $20,000, about half the price of tuition. Note that since my BS degree was in Computer Science at Stanford, I can transfer many courses over, lessening my course load for this year and enabling me to focus the vast majority of my academic time on AI safety research rather than coursework.

The maximum I would accept is $84,291, which would cover all of my school costs for the academic year: roughly 50% tuition and 50% other living expenses, based on the budget above.
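For reference, a rough breakdown of that maximum ask, where the living-expense figure is simply inferred as the difference between the total and the tuition price quoted above (not a line item quoted from the university budget):

\[
\underbrace{\$40{,}680}_{\text{tuition},\ \approx 48\%} \;+\; \underbrace{\$43{,}611}_{\text{living expenses},\ \approx 52\%} \;=\; \$84{,}291
\]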

Track Record

I have shown many strong signals of promise in technical AI safety research despite getting seriously involved in it only about a year ago:

  • I led a small-group LLM alignment research project for several months that was submitted to the NeurIPS conference (preprint forthcoming)

  • For Summer 2023, I applied to and was accepted by several summer research programs, and I chose to participate in both the Existential Risk Alliance Cambridge fellowship as a technical AI safety research fellow and the Krueger AI Safety Lab as a summer intern, leading a project on evaluating multi-agent language model cooperation in a novel variant of the game Diplomacy (ICLR submission forthcoming)

  • I participated in two Alignment Jam hackathons, winning 2nd and 1st place

  • I assembled a public up-skilling plan which several other budding AI safety researchers have found useful

  • At Stanford, I completed my BS degree in Computer Science in 2023 with a 3.994 GPA and have taken and done well in 6 grad-level machine learning courses.

  • Subsequently, I was accepted to Stanford’s coterminal MS program in Computer Science, with an explicit focus in my application on AI safety research and a plan to complete the degree in 1 year.

Additionally, I have strong organizational and management skills which make me confident in my ability to execute on these research projects:

  • I founded and lead Stanford AI Alignment (SAIA), a student group and research community at Stanford University organized under the Stanford Existential Risks Initiative (SERI).

  • SAIA’s mission is to accelerate students into highly-impactful careers in AI safety, build the AI alignment community at Stanford, and do excellent research that makes transformative artificial intelligence go well.

  • Under my leadership, SAIA has grown into a community of over 100 active core members focused on advancing AI safety and operates 3 AI alignment classes, a supervised research program, AI safety and AI governance paper discussion groups, AI safety hackathons, an intercollegiate AI safety retreat, coworking, a research symposium, social events, and other community-building programs.

  • You can view a retrospective of recent SAIA progress and plans here.

Failure Modes

  • None of these research ideas pans out, and nothing much comes of this in either direction.

    • I have many ideas, which I plan to approach with a stochastic decision process attitude (pursuing several and doubling down on whichever show promise), and a strong network of potential collaborators and researchers to ask for advice, so I’m hopeful that some of them will be positively impactful.

  • Some of this research gets out but is net negative.

    • It seems pretty easy to keep this research from becoming net negative: most direct research failures will simply mean a project does not work. I also plan to carefully discuss the implications of positive research results with others in the AI governance space before approaching decision-makers, so we can mitigate the risk of providing information that later leads to bad decisions.

  • I drop out of Stanford before completing the MS degree.

    • This might actually be the correct choice if large changes start happening in the world or if I find a much better professional position that makes sense for me to join.

    • In such an event, I would return any funding from this grant that was earmarked for future time and research.

Alternative Funding Sources

I will likely apply for alternative funding, since I plan to carry out this research and complete the MS CS degree regardless and must find some way to pay for it. I’m aware that alignment grantmaking is funding-constrained right now, so I’m not especially confident that other funding will work out.

I’ve also investigated funding through the university but have come up dry: Stanford has a large and competitive ML graduate community in which even many Ph.D. students struggle to get funding. I asked several people, including my advisor, Mykel Kochenderfer, about the possibility of a Research Assistantship (RAship), but unfortunately these are usually given only to the top CS Ph.D. students, not to first-year MS students like me. Additionally, I applied for Course Assistantships (CAships) in 10 classes, but these are also competitive, and I was not offered any of them (it is also unclear to me whether the funding would be worth spending 20 hours/week TAing a course).

Thank you for your attention!


Austin Chen

about 1 year ago

I'm funding half of the requested $10k ask based on my prior experience chatting with Gabe (see writeup here); Gabe didn't actually withdraw that money at the time, so I'm happy to follow through on that now.


Gabe Mukobi

about 1 year ago

Thank you so much, Austin!


Vincent Weisser

about 1 year ago

Is this project still seeking funding, or is it unrelated to this one? https://manifund.org/projects/gabriel-mukobi-summer-research


Gabe Mukobi

about 1 year ago

Still seeking funding, and unrelated to that one! My summer research has almost concluded (we’re polishing it up for ICLR and NeurIPS workshop submissions right now); this project is for my next series of research projects.


Gaurav Yadav

about 1 year ago

Hmm, I am quite surprised this hasn’t been funded. Maybe I am missing something, but these ideas seem pretty good at first glance.


Gabe Mukobi

about 1 year ago

Haha thanks! Tbf there are a lot of projects on Manifund now, and this grant ask is not insignificant.


Gaurav Yadav

about 1 year ago

Hmm, I’m not sure about that assessment. I personally don’t think there are a lot of high-quality submissions; yours seems quite good compared to most, IMO. Sure, it’s not insignificant, but I’m surprised not to see even a bit of funding commitment from any of the regrantors up until now. It could be that there’s a lot of internal discussion happening.