Safety

Mechanistic Interpretability

Mech Interp, MI
A research approach that tries to understand what is happening inside neural networks at the level of individual neurons, circuits, and features: not just what the model outputs, but how it computes those outputs. The goal is to reverse-engineer the algorithms that neural networks learn, much as you would reverse-engineer compiled software to recover its source code.

Why it matters

If we are going to trust AI with important decisions, we need to understand how it makes them. Mechanistic interpretability is the most rigorous attempt at this: it asks not just "what did the model do?" but "what algorithm did it implement, and why?". It is central to AI safety research, particularly at Anthropic, and it is producing real results: researchers have identified circuits for indirect object identification, induction heads, and modular arithmetic inside Transformers.

Deep Dive

The field draws on a key observation: neural networks usually don't store information in individual neurons. Instead, they use superposition: many features are encoded as directions in activation space, with individual neurons participating in many features simultaneously. A neuron that seems to respond to "the concept of water" might actually respond to a superposition of features related to liquids, transparency, flow, and specific contexts. Disentangling these superposed features is one of the field's central challenges.
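
To make this concrete, here is a toy sketch of superposition (all dimensions and values are illustrative, not taken from any real model): it packs more sparse features than there are neurons into activation space as random directions, then shows that each feature can still be read back out approximately with a dot product, even though no single neuron is dedicated to it.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_neurons = 50, 20          # more features than neurons
directions = rng.normal(size=(n_features, n_neurons))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# A sparse input: only a few features are active at once.
feature_values = np.zeros(n_features)
active = rng.choice(n_features, size=3, replace=False)
feature_values[active] = 1.0

# The activation vector superposes the active features' directions.
activation = feature_values @ directions        # shape: (n_neurons,)

# Each feature is approximately recoverable by projecting the
# activation back onto its direction, up to small interference.
readout = directions @ activation
print("active features:", sorted(active))
print("top readouts:   ", np.argsort(readout)[-3:][::-1])
```

Because random directions in a 20-dimensional space are only approximately orthogonal, the readouts carry a little interference from the other active features; that interference is exactly the cost networks pay for superposition.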

Sparse Autoencoders

One of the most promising recent tools is the sparse autoencoder (SAE). You train an autoencoder to reconstruct a model's internal activations, but with a sparsity constraint that forces it to use only a few features at a time. The learned features often correspond to interpretable concepts — a feature for "code comments," one for "French text," one for "mathematical reasoning." Anthropic published influential work using SAEs to find interpretable features in Claude, identifying millions of features including ones for deception, specific concepts, and language patterns.
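
A minimal sketch of the recipe in PyTorch (toy sizes, with random data standing in for real model activations; actual SAE work uses far wider dictionaries and activations captured from a trained model):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE: reconstruct activations through an overcomplete, sparse code."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))   # nonnegative feature activations
        return self.decoder(features), features

d_model, d_hidden, l1_coeff = 64, 512, 1e-3      # illustrative sizes
sae = SparseAutoencoder(d_model, d_hidden)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)

# Stand-in for a batch of internal activations from a model.
acts = torch.randn(256, d_model)

for step in range(100):
    recon, features = sae(acts)
    # Reconstruction error plus an L1 penalty that enforces sparsity.
    loss = ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The L1 term pushes most feature activations toward zero, so each activation vector is explained by a handful of dictionary directions; interpretability work then inspects which inputs maximally activate each learned feature to decide what concept it represents.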

From Features to Circuits

Beyond individual features, mechanistic interpretability tries to trace circuits: how does information flow through the network to produce a specific behavior? For example, "induction heads" are two-attention-head circuits that implement in-context learning by pattern-matching: if the model sees "A B ... A" it predicts B. These circuits have been found in models from 2-layer toy Transformers to full-scale LLMs. Understanding circuits at scale remains an open challenge, but progress is accelerating.
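
The induction behavior itself is simple enough to state as plain code. The sketch below is not a transformer; it just spells out the algorithm the two-head circuit is believed to implement: find the previous occurrence of the current token and copy whatever followed it.

```python
def induction_predict(tokens):
    """Predict the next token the way an induction circuit does:
    find the most recent earlier occurrence of the current token
    and copy the token that followed it. Returns None if no match."""
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):   # scan earlier positions
        if tokens[i] == current:
            return tokens[i + 1]               # copy the successor
    return None

# "A B ... A" -> predict "B"
print(induction_predict(["A", "B", "C", "D", "A"]))  # -> "B"
```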
