
Exploring feature interactions in transformer LLMs through sparse autoencoders

Active grant
$8,500 raised
$15,000 funding goal

Project summary

Sparse autoencoders (SAEs) are good at extracting distinct, largely monosemantic features from transformer language models. An effective dictionary of feature decompositions for a transformer component roughly sketches out the set of features that component has learned.

The goal of this project is to provide a more intuitive and structured way of understanding how features interact to form complex, high-level behaviors in transformer language models. To do that, we want to define and explore the feature manifold, understand how features evolve across different transformer components, and use these feature decompositions to find interesting circuits.
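
For concreteness, the kind of sparse autoencoder this proposal builds on can be sketched in a few lines of PyTorch. The choices below (a single ReLU encoder, an untied linear decoder, and an L1 sparsity penalty with an illustrative coefficient) are assumptions made for the sake of the example, not the exact setup the experiments will use.

```python
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Minimal SAE: reconstructs activations as a sparse combination of dictionary features."""

    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)  # activation -> feature coefficients
        self.decoder = nn.Linear(d_dict, d_model)  # feature coefficients -> reconstruction
        # Each column of self.decoder.weight (shape d_model x d_dict) is a learned feature direction.

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))  # non-negative feature activations
        x_hat = self.decoder(f)
        return x_hat, f


def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    """Reconstruction error plus an L1 sparsity penalty on feature activations."""
    recon = (x - x_hat).pow(2).mean()
    sparsity = f.abs().mean()
    return recon + l1_coeff * sparsity


# Illustrative usage on random "activations"; in practice these would come from a
# transformer component such as an MLP output or the residual stream.
if __name__ == "__main__":
    sae = SparseAutoencoder(d_model=512, d_dict=4096)
    acts = torch.randn(64, 512)
    x_hat, f = sae(acts)
    print(sae_loss(acts, x_hat, f).item())
```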

What are this project's goals and how will you achieve them?

  1. SAE Feature Decomposition for Circuit Search Algorithm Development

    1. A major challenge in using sparse autoencoders for future interpretability work is turning feature decompositions into effective predictive circuits. Existing algorithms, such as ACDC, are based on pruning computation graphs and do not scale easily to larger models.

    2. Cunningham et al. demonstrate that causal links can be identified by modifying features in an earlier layer and observing the impact on features in subsequent layers.

      1. A promising approach for a circuit search algorithm would be to observe changes in feature activations upon ablating features in a previous layer (a minimal sketch of this ablation procedure appears after this list). We could focus on a subset of the input distribution to simplify the analysis and find more interpretable features.

      2. For modeling end-to-end behaviors, we would use an unsupervised learning algorithm to learn clusters of features and identify similar ones (i.e. learn feature manifolds). We would then use a similarity metric (such as cosine similarity) to group features and run ACDC over the resulting feature decompositions.

        1. Further, we would investigate how feature splitting occurs in the context of these manifolds. Are features divided into smaller manifolds, or do they split within a manifold?

  2. Optimizing feature extraction

    1. Connecting dictionaries learned on different transformer components

      1. How do the learned features evolve over different components of the model?

        1. For example, how do the features learned by the MLPs, with their richer dimensionality, relate to those learned by the attention heads and the residual stream?

      2. What metrics can we develop to better understand and measure these relationships?

      3. What mappings are suitable for connecting dictionaries? Can we use gradient descent to find the connections between two SAEs learned on different components of the transformer model? (A gradient-descent sketch of one such mapping appears after this list.)

    2. What specific properties make a feature more suitable for inclusion in the residual stream?

      1. If a feature is not added to the residual stream at a certain layer L, can it emerge in a subsequent layer L+k, and if so, under what conditions?

    3. Can we predict whether and when a model will exhibit end-to-end behaviors by tracking the addition of constituent features to the residual stream at various stages of training?

  3. Efficiency of feature learning in SAEs 

    1. If a model is trained on a dataset D’ which is a subset of a dataset D, do the features it learns form a clear subset of the features learned by the same model trained on the entire dataset D? 

      1. Does the efficiency of feature learning diminish?

      2. Can we train smaller SAEs to find subsets of features?

      3. Analysis of feature quality: are features learned on D’ more or less noisy than features learned on D?

    2. Do features learned by the model on a subset dataset generalize well to the full dataset?

    3. Can we develop better metrics for comparing feature sets? (A simple cosine-similarity baseline is sketched after this list.)
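
The ablation-based circuit search idea from goal 1 could start from something like the sketch below: encode an earlier component's activations with its SAE, zero out one feature, decode back into activation space, rerun the slice of the model between the two hook points, and measure how the later component's feature activations shift. The `downstream_fn` argument and the SAE attribute names are placeholders matching the toy SAE sketch above, not a specific library API.

```python
import torch


@torch.no_grad()
def feature_ablation_effects(acts_early, downstream_fn, sae_early, sae_late, feature_idx):
    """
    Estimate the effect of ablating one upstream SAE feature on downstream SAE features.

    acts_early:    activations at the earlier component, shape (batch, d_model)
    downstream_fn: callable mapping the earlier component's activations to the later
                   component's activations (stands in for the slice of the transformer
                   between the two hook points)
    sae_early, sae_late: trained SAEs for the two components (as in the sketch above)
    feature_idx:   index of the upstream feature to ablate
    """
    # Encode the earlier component's activations into SAE features.
    f_early = torch.relu(sae_early.encoder(acts_early))

    # Baseline: reconstruct without intervention, so both paths share the same SAE
    # reconstruction error, then read off downstream feature activations.
    base_late = torch.relu(sae_late.encoder(downstream_fn(sae_early.decoder(f_early))))

    # Intervention: zero the chosen feature before decoding, then rerun downstream.
    f_ablated = f_early.clone()
    f_ablated[:, feature_idx] = 0.0
    abl_late = torch.relu(sae_late.encoder(downstream_fn(sae_early.decoder(f_ablated))))

    # Mean shift per downstream feature; large magnitudes suggest a causal link.
    return (abl_late - base_late).mean(dim=0)
```

Running this over each upstream feature (or over clusters of them) and thresholding the effect sizes would yield candidate edges over which an ACDC-style pruning pass could then operate.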
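
For goal 2, one simple way to connect two dictionaries with gradient descent is to fit a linear map from one SAE's feature activations to the other's on the same inputs, and then inspect the large entries of the learned matrix. The shapes, optimizer, and hyperparameters below are illustrative assumptions.

```python
import torch
import torch.nn as nn


def fit_feature_map(f_src: torch.Tensor, f_dst: torch.Tensor,
                    steps: int = 1000, lr: float = 1e-2) -> torch.Tensor:
    """
    Fit a linear map W so that f_src @ W approximates f_dst.

    f_src, f_dst: feature activations (n_samples x n_features) from SAEs trained on
                  two different transformer components, evaluated on the same inputs.
    A large entry W[i, j] suggests source feature i helps predict target feature j.
    """
    n_src, n_dst = f_src.shape[1], f_dst.shape[1]
    W = torch.zeros(n_src, n_dst, requires_grad=True)
    opt = torch.optim.Adam([W], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(f_src @ W, f_dst)
        loss.backward()
        opt.step()
    return W.detach()
```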
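
And for goal 3, a minimal baseline metric for comparing the feature sets learned on D’ versus D is greedy matching of decoder directions by cosine similarity. The column convention (one feature direction per decoder column, e.g. `sae.decoder.weight` from the sketch above) and the 0.9 threshold are assumptions for illustration.

```python
import torch


def dictionary_overlap(decoder_a: torch.Tensor, decoder_b: torch.Tensor,
                       threshold: float = 0.9) -> float:
    """
    Fraction of features in dictionary A with a close counterpart in dictionary B.

    decoder_a, decoder_b: decoder weight matrices with one feature direction per column,
                          shapes (d_model, n_a) and (d_model, n_b).
    A feature counts as recovered if its best cosine similarity against any
    feature in B exceeds the (assumed) threshold.
    """
    a = torch.nn.functional.normalize(decoder_a, dim=0)
    b = torch.nn.functional.normalize(decoder_b, dim=0)
    sims = a.T @ b                  # (n_a, n_b) cosine similarities
    best = sims.max(dim=1).values   # best match in B for each feature in A
    return (best > threshold).float().mean().item()
```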

How will this funding be used?

The main use of the grant will be to acquire the compute resources needed for running the experiments. I would train multiple sparse autoencoders (SAEs) on A100 GPUs for a range of open-source models, including both base and fine-tuned models for comparison. I would experiment with different SAE architectures, study feature evolution, and so on. The remaining funds would provide a modest salary for me, as I am currently funding this work out of pocket.

What are the most likely causes and outcomes if this project fails? (premortem)

  1. It’s possible that 

    1. the algorithm for circuit search using feature decompositions does not outperform traditional methods, or does not scale well. 

    2. navigating the feature manifold requires a lot of computational resources and proves difficult.

  2. The project has the potential to attract the attention of capabilities labs.

What other funding are you or your project getting?

  1. None - I am funding this project out of pocket at the moment. That is primarily why I am applying for this grant: I want to expand the scope of the project, which requires additional computational resources.

  2. I want to pursue alignment work (focused on interpretability) full-time and will be applying for funding for that.

donated $8,500

Marcus Abramovitch

over 1 year ago

About time I wrote up my reasoning for this grant:

Main points in favor of this grant


Kunvar has set out an ambitious research agenda for himself which would lead to some quite exciting papers. This donation will also get a lot of mech interp per dollar spent if successful (like 10x more per dollar than most other stuff I come across, given the little compute and low overhead).

The main reason for funding this, though, is that I know Kunvar is very smart from many interactions I have had with him. I doubt this will be as apparent to other people, so I am pretty uniquely able to make this grant.

Donor's main reservations


Independent alignment work without any mentorship doesn't have a fantastic track record. That said, I am expecting this to launch Kunvar into something like MATS or another program to get him some mentorship on this, and he has mentioned collaborating with others on these projects.

Process for deciding amount


This was the amount needed to cover the compute and the minimum salary (which was super low). If more were needed, I could make that happen, but this just gets the project off the ground.

Conflicts of interest

None.


Austin Chen

over 1 year ago

Approving as part of our technical AI safety portfolio!


Gaurav Yadav

over 1 year ago

If you haven't been following it, there's a lot of discussion about this on EleutherAI.


Kunvar Thaman

over 1 year ago

@GauravYadav Thanks!