Fundamentals

Superposition

Feature Superposition, Polysemanticity
The phenomenon whereby neural networks encode many more features (concepts, patterns) than they have neurons, representing features as directions in activation space rather than dedicating individual neurons to individual features. A single neuron participates in encoding dozens of features simultaneously, and each feature is distributed across many neurons.

Why It Matters

Superposition is why neural networks are hard to interpret and why mechanistic interpretability is challenging. If each neuron represented a single concept (say, "the concept of dogs"), interpretation would be straightforward. Instead, concepts are smeared across neurons in overlapping patterns. Understanding superposition is key to understanding both how neural networks compress information and why they sometimes behave unexpectedly.

Deep Dive

The key insight: a model with 4096 neurons per layer can represent far more than 4096 features by using the full 4096-dimensional space. Each feature is a direction (a vector) in this space, and features can overlap as long as they're not too similar. This is mathematically analogous to compressed sensing — you can store more signals than dimensions if the signals are sparse (only a few are active at any time).
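The compressed-sensing intuition can be demonstrated in a few lines of NumPy. This is a toy sketch (the dimensions and sparsity level are illustrative, not taken from any real model): assign each of 1024 features a random unit direction in a 256-dimensional space, superimpose a few active features into one vector, and recover them by projecting back onto every direction.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 256, 1024  # 256 dimensions, 1024 features: 4x "over-capacity"
k = 3             # sparsity: only 3 features active at a time

# Each feature is a random unit direction in the d-dimensional space.
directions = rng.normal(size=(n, d))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# Activate k features with coefficient 1 and superimpose them.
active = rng.choice(n, size=k, replace=False)
x = directions[active].sum(axis=0)

# Decode by projecting onto every feature direction: active features
# score near 1, while inactive ones only pick up small interference
# noise on the order of 1/sqrt(d) per active feature.
scores = directions @ x
recovered = np.argsort(scores)[-k:]

print(sorted(active.tolist()), sorted(recovered.tolist()))
```

Because random high-dimensional directions are nearly orthogonal, the three active features stand out cleanly from the interference floor; recovery would start to fail if many more features activated at once.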

Why Models Do This

Models learn superposition because the world has more features than any practical model has dimensions. A model needs to represent thousands of concepts (colors, emotions, syntax rules, factual knowledge, code patterns), but might only have 4096 dimensions per layer. Superposition lets it pack all these features into the available space, at the cost of some interference when multiple overlapping features activate simultaneously.
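The interference cost mentioned above shrinks as the space grows, which is part of why wider layers can pack features more cleanly. A small numerical check (sizes chosen for illustration): measure the average overlap between random unit feature directions at two different dimensionalities.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_abs_interference(d, n=512):
    """Average |cosine similarity| between n random unit directions in R^d."""
    dirs = rng.normal(size=(n, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    gram = dirs @ dirs.T
    off_diag = gram[~np.eye(n, dtype=bool)]  # drop each feature's self-overlap
    return np.abs(off_diag).mean()

# Interference between random directions scales like 1/sqrt(d):
# quadrupling... er, 16x the dimensions cuts overlap by about 4x.
low_d = mean_abs_interference(64)
high_d = mean_abs_interference(1024)
print(low_d, high_d)
```

The expected overlap is roughly `sqrt(2 / (pi * d))`, so at 64 dimensions it sits near 0.1 and at 1024 dimensions near 0.025: more room means less crosstalk when overlapping features fire together.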

Implications for Safety

Superposition has direct implications for AI safety. If a "deception" feature is superimposed with other benign features, it's hard to detect and remove. Sparse autoencoders (used in mechanistic interpretability) try to disentangle superposition by finding the individual feature directions, but the number of features in a large model may be enormous — Anthropic identified millions of interpretable features in Claude. Understanding and controlling superposition is a central challenge for making AI systems reliably safe.
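A minimal sketch of the sparse autoencoder idea, assuming nothing about any particular lab's implementation (the sizes, names, and L1 coefficient here are hypothetical): a wide dictionary of feature directions, a ReLU encoder that produces sparse feature activations, and a loss trading reconstruction error against an L1 sparsity penalty. The weights below are random for brevity; a real SAE trains them on a model's activations.

```python
import numpy as np

rng = np.random.default_rng(2)

d_model, d_dict = 64, 512  # dictionary 8x wider than the activation space

# Randomly initialized SAE parameters (illustrative; a real SAE learns these).
W_enc = rng.normal(scale=0.1, size=(d_model, d_dict))
b_enc = np.zeros(d_dict)
W_dec = rng.normal(scale=0.1, size=(d_dict, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode activations into sparse features, then reconstruct."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU -> nonnegative, sparse
    x_hat = f @ W_dec + b_dec               # reconstruction from dictionary
    return f, x_hat

def sae_loss(x, l1_coef=1e-3):
    """Reconstruction error plus L1 penalty pushing features toward sparsity."""
    f, x_hat = sae_forward(x)
    recon = ((x - x_hat) ** 2).sum(axis=-1).mean()
    sparsity = np.abs(f).sum(axis=-1).mean()
    return recon + l1_coef * sparsity

x = rng.normal(size=(8, d_model))  # stand-in batch of model activations
print(sae_loss(x))
```

After training, each column of `W_dec` ideally corresponds to one interpretable feature direction, undoing the superposition the base model learned.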
