Reverse Dependencies of transformer-lens
The following projects have a declared dependency on transformer-lens:
- auto-circuit — no summary
- Auto-HookPoint — Make any model compatible with transformer_lens
- belief-state-superposition — Investigating belief state representations of transformers trained on Hidden Markov Model emissions
- codebook-features — Sparse and discrete interpretability tool for neural networks
- devinterp — A library for doing research on developmental interpretability
- e2e-sae — Repo for training sparse autoencoders end-to-end
- feature-lens — In-depth visualizations for SAE features
- graphpatch — A library for activation patching on PyTorch neural network models
- hugginglens — no summary
- mamba-lens — TransformerLens port for Mamba
- mistral-sae — Sparse AutoEncoder to decode Mistral LLM
- pattern-lens — no summary
- reverseAbliterator — A package for reverse abliteration of language models
- sae-dashboard — Open-source SAE visualizer, based on Anthropic's published visualizer. Forked / Detached from sae_vis.
- sae-lens — Training and Analyzing Sparse Autoencoders (SAEs)
- sae-vis — Open-source SAE visualizer, based on Anthropic's published visualizer.
- smol-sae — Minimal implementation of SAEs
- token-trace — Transformer token flow visualizer
- transcoders-slim — A template for python projects in PDM
- transpector — Visually inspect, analyse and debug transformer models. Aimed at reducing cycle times for interpretability research and lowering the barrier to entry.
- tuned-lens — Tools for understanding how transformer predictions are built layer-by-layer
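The listing above comes from an index-wide scan of declared dependencies. As a minimal, hedged sketch (not part of the listing itself), the snippet below shows how one could find the subset of these projects that is installed in the *current environment*, using only the standard library's `importlib.metadata`; the helper name `reverse_dependencies` is illustrative, not an existing API.

```python
# Sketch: list installed distributions that declare a dependency on transformer-lens.
from importlib.metadata import distributions


def reverse_dependencies(target: str = "transformer-lens"):
    """Yield names of installed distributions whose metadata requires `target`."""
    target = target.lower().replace("_", "-")
    for dist in distributions():
        for req in dist.requires or []:
            # Requirement strings look like "transformer-lens>=1.0; extra == 'dev'".
            name = req.split(";")[0].split()[0]
            # Strip version specifiers/extras to compare the bare project name.
            for sep in ("<", ">", "=", "!", "~", "["):
                name = name.split(sep)[0]
            if name.lower().replace("_", "-") == target:
                yield dist.metadata["Name"]
                break


if __name__ == "__main__":
    for name in sorted(set(reverse_dependencies())):
        print(name)
```

Note that this only inspects packages already installed locally; it does not query the package index.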