Cadenza Labs


We are a new AI safety org focusing on Conceptual Interpretability

$2,710 total balance
$0 charity balance
$2,710 cash balance

$0 in pending offers

About Me

The goal of our group is to do research that contributes to solving AI alignment. Broadly, we aim to work on whatever technical alignment projects have the highest expected value; our current best ideas for research directions are in interpretability. More about our research agenda can be found here.