We are a new AI safety org focusing on conceptual interpretability: https://cadenzalabs.org/
The goal of our group is to do research that contributes to solving AI alignment. Broadly, we aim to work on whichever technical alignment projects have the highest expected value; our current best ideas for research directions are in interpretability. More about our research agenda can be found here.