
Superposition

Feature Superposition, Polysemanticity
The phenomenon where neural networks encode many more features (concepts, patterns) than they have neurons, by representing features as directions in activation space rather than dedicating individual neurons to individual features. A single neuron participates in encoding dozens of features simultaneously, and each feature is distributed across many neurons.

Why it matters

Superposition is why neural networks are hard to interpret and why mechanistic interpretability is challenging. If each neuron represented one concept (like "the concept of dogs"), interpretation would be straightforward. Instead, concepts are smeared across neurons in overlapping patterns. Understanding superposition is key to understanding both how neural networks compress information and why they sometimes behave unexpectedly.

Deep Dive

The key insight: a model with 4096 neurons per layer can represent far more than 4096 features by using directions in the full 4096-dimensional activation space. Each feature is a direction (a vector) in this space, and many features can share the space as long as their directions are nearly orthogonal. This is mathematically analogous to compressed sensing — you can recover more signals than you have dimensions if the signals are sparse (only a few are active at any time).
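A minimal numpy sketch of this geometric fact (dimensions here are illustrative, not from any real model): random directions in a high-dimensional space are nearly orthogonal, so far more feature directions fit than there are dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512            # illustrative layer width
n_features = 4096  # 8x more features than dimensions

# Assign each feature a random unit-norm direction in d-dimensional space.
features = rng.standard_normal((n_features, d))
features /= np.linalg.norm(features, axis=1, keepdims=True)

# Pairwise overlaps (cosine similarities) between distinct features.
overlaps = features @ features.T
np.fill_diagonal(overlaps, 0.0)

# In high dimensions, random directions are nearly orthogonal: the typical
# overlap is about 1/sqrt(d) ~ 0.044 here — small, but not exactly zero.
max_overlap = float(np.abs(overlaps).max())
```

Even the worst-case overlap among all ~16 million pairs stays modest, which is why sparse features can coexist with only limited interference.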

Why Models Do This

Models learn superposition because the world has more features than any practical model has dimensions. A model needs to represent thousands of concepts (colors, emotions, syntax rules, factual knowledge, code patterns), but might only have 4096 dimensions per layer. Superposition lets it pack all these features into the available space, at the cost of some interference when multiple overlapping features activate simultaneously.
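The interference cost can be demonstrated directly. In this sketch (same illustrative random-direction setup as above, not a trained model), we superimpose a set of active features into one activation vector and measure how much signal leaks into features that were never activated:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_features = 512, 4096

directions = rng.standard_normal((n_features, d))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

def interference(active_idx):
    # Superimpose the active features into a single activation vector,
    # then read every feature back by projecting onto its direction.
    x = directions[active_idx].sum(axis=0)
    readout = directions @ x
    # Interference = average signal leaking into *inactive* features.
    inactive = np.ones(n_features, dtype=bool)
    inactive[active_idx] = False
    return float(np.abs(readout[inactive]).mean())

sparse_noise = interference(rng.choice(n_features, 5, replace=False))
dense_noise = interference(rng.choice(n_features, 500, replace=False))
# Interference grows with the number of simultaneously active features,
# which is why superposition only works well when features are sparse.
```

With 5 active features the leakage is negligible; with 500, it is large enough to corrupt readouts — the trade-off the paragraph above describes.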

Implications for Safety

Superposition has direct implications for AI safety. If a "deception" feature is superimposed with other benign features, it's hard to detect and remove. Sparse autoencoders (used in mechanistic interpretability) try to disentangle superposition by finding the individual feature directions, but the number of features in a large model may be enormous — Anthropic identified millions of interpretable features in Claude. Understanding and controlling superposition is a central challenge for making AI systems reliably safe.
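A sparse autoencoder's basic structure can be sketched in a few lines. This is an illustrative, untrained toy (random weights, made-up dimensions), not Anthropic's implementation: the encoder maps activations into a much wider feature basis with a ReLU, the decoder reconstructs the activation as a sum of feature directions, and an L1 penalty pushes the feature activations toward sparsity.

```python
import numpy as np

rng = np.random.default_rng(2)
d_model, d_sae = 512, 4096  # SAE feature basis is much wider than the layer

# Illustrative random parameters — in practice these are trained.
W_enc = rng.standard_normal((d_model, d_sae)) / np.sqrt(d_model)
b_enc = np.zeros(d_sae)
W_dec = rng.standard_normal((d_sae, d_model)) / np.sqrt(d_sae)
b_dec = np.zeros(d_model)

def sae_forward(x, l1_coeff=1e-3):
    # Encode: project into the wide feature basis; ReLU keeps activity sparse.
    f = np.maximum(0.0, x @ W_enc + b_enc)
    # Decode: reconstruct the activation as a weighted sum of feature directions.
    x_hat = f @ W_dec + b_dec
    # Loss = reconstruction error + L1 sparsity penalty on feature activity.
    loss = float(np.mean((x - x_hat) ** 2) + l1_coeff * np.abs(f).mean())
    return f, x_hat, loss

x = rng.standard_normal((8, d_model))  # a batch of layer activations
f, x_hat, loss = sae_forward(x)
```

After training, each column of `W_dec` ideally becomes one interpretable feature direction, undoing the superposition the previous sections describe.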
