Gabe Mukobi

@GabeMukobi

Stanford CS BS->MS 2018-24 – AI Safety, Alignment, and Governance

https://gabrielmukobi.com/
$0 total balance
$0 charity balance
$0 cash balance

$0 in pending offers

About Me

I am a Stanford M.S. student in Computer Science building a career in AI safety, alignment, and governance research. Additionally, as an AI safety field-builder, I lead the Stanford AI Alignment student group and research community.

Projects

Comments

Gabe Mukobi

about 1 year ago

Haha thanks! Tbf there are a lot of projects on Manifold now and this grant ask is not insignificant.

Gabe Mukobi

about 1 year ago

Still seeking funding, and unrelated to that one! My summer research has almost concluded (we're polishing it up for ICLR and NeurIPS workshop submissions right now); this project is for my next series of research projects.

Gabe Mukobi

over 1 year ago

Update: This project now has funding (I'm working with David Krueger's lab and am able to use their funding), so I won't be withdrawing funds from this, and funders should probably look elsewhere for opportunities!

Gabe Mukobi

over 1 year ago

Thanks for this grant and for the Manifund platform! I should let you know that I'll be slightly pivoting this project to focus first on building a general-sum version of Diplomacy or a similar game for language models, with the intention of evaluating and improving LM cooperation (since normal Diplomacy is zero-sum, which leads to poor cooperative equilibria). I still want to pursue making a scary demo for multipolar risks in open-ended environments, but on the side to start, as it seems more unstructured, and I hope the first thing will help lead to this second thing.

I'll probably be using similar API model resources and will write up more about this in the next week, but I wanted to share in case this is getting outside the scope of what funders here want to support.