
Sparse Autoencoder

SAE
A neural network trained to reconstruct a model's internal activations through a bottleneck with a sparsity constraint: only a few features may be active at a time. The learned features often correspond to interpretable concepts (specific topics, linguistic patterns, reasoning strategies), making SAEs the primary tool for disentangling the superposed features inside large language models.

Why It Matters

Sparse autoencoders are the microscope of mechanistic interpretability. LLMs pack thousands of features into each layer through superposition, which makes individual neurons uninterpretable. SAEs decompose these superposed representations into individual, interpretable features. Anthropic used SAEs to identify millions of features in Claude, including features for deception, specific concepts, and safety-relevant behaviors.

Deep Dive

Architecture: the SAE takes a model's activation vector (dimension d_model, e.g., 4096) and encodes it into a much larger sparse representation (e.g., 64K features, of which only ~100 are active for any given input). It then decodes back to d_model and is trained to minimize reconstruction error. The sparsity constraint (L1 penalty on the hidden layer) forces the SAE to use only a few features per input, ensuring each feature is specific rather than diffuse.
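To make the architecture concrete, here is a minimal sketch in PyTorch. The dimensions (d_model=4096, 65,536 features), the plain ReLU encoder, and the L1 coefficient are illustrative assumptions, not a specific published recipe.

```python
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 4096, n_features: int = 65536):
        super().__init__()
        # Encode into a much wider feature space, decode back to d_model.
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, x: torch.Tensor):
        # ReLU keeps feature activations non-negative; sparsity comes
        # from the L1 penalty in the loss, not the architecture itself.
        f = torch.relu(self.encoder(x))
        x_hat = self.decoder(f)
        return x_hat, f


def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction error keeps features faithful to the activations;
    # the L1 penalty on feature activations enforces sparsity.
    recon = (x - x_hat).pow(2).mean()
    sparsity = f.abs().mean()
    return recon + l1_coeff * sparsity


# Usage: x stands in for a batch of activation vectors captured from the LLM.
sae = SparseAutoencoder()
x = torch.randn(8, 4096)
x_hat, f = sae(x)
loss = sae_loss(x, x_hat, f)
```

In practice the L1 coefficient trades off reconstruction fidelity against sparsity: too low and features stay dense and diffuse, too high and the SAE reconstructs poorly.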

What SAEs Find

When trained on LLM activations, SAEs discover interpretable features: a "Golden Gate Bridge" feature that activates on text about the bridge, a "Python code" feature, a "French language" feature, a "sycophantic agreement" feature, and so on. These features are more interpretable than individual neurons because the sparsity constraint separates overlapping concepts that neurons represent in superposition. Anthropic's research found features ranging from concrete (specific entities) to abstract (deception, uncertainty).

Applications Beyond Interpretation

SAE features can be used for more than understanding: clamping a feature to zero suppresses the corresponding behavior (deactivating a "deception" feature), while amplifying a feature strengthens it. This opens the possibility of fine-grained behavioral control without retraining. However, the technique is still experimental — feature interactions are complex, and modifying one feature can have unintended effects on others due to residual superposition.
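The sketch below illustrates the clamp/amplify idea, reusing the SparseAutoencoder class from the architecture sketch above. The feature index is hypothetical, and in a real intervention the edited reconstruction would be patched back into the model's forward pass at the layer the SAE was trained on (e.g., via hooks).

```python
import torch


@torch.no_grad()
def edit_feature(sae, x, feature_idx: int, scale: float):
    # Encode activations into sparse features, rescale one feature,
    # then decode back to the model's activation space.
    f = torch.relu(sae.encoder(x))
    f[:, feature_idx] *= scale  # scale=0.0 clamps (suppresses), >1.0 amplifies
    return sae.decoder(f)


sae = SparseAutoencoder()  # class defined in the architecture sketch above
x = torch.randn(1, 4096)
# Suppress a hypothetical feature 1234; the result would replace the
# original activations in the forward pass to steer behavior.
x_edited = edit_feature(sae, x, feature_idx=1234, scale=0.0)
```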
