Fundamentals

Induction Head

A specific circuit of two attention heads discovered in Transformers that implements in-context learning via pattern matching. If the model has already seen the pattern "A B" earlier in the context and now sees "A" again, the induction head predicts that "B" will follow. This simple mechanism is believed to be a fundamental building block of how LLMs learn from examples in their context.
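The rule "A B ... A → B" can be sketched as a hard lookup over the context. This is an illustrative simplification (the function name and behavior are assumptions for this sketch): a real induction head implements this softly, through attention weights over all positions, rather than as an exact string match.

```python
# Minimal sketch of the induction rule "A B ... A -> B":
# look back for the most recent earlier occurrence of the current
# token and predict the token that followed it.
def induction_predict(context):
    current = context[-1]
    # Scan earlier positions, most recent first.
    for i in range(len(context) - 2, -1, -1):
        if context[i] == current:
            return context[i + 1]  # the token that followed last time
    return None  # no earlier match: no induction prediction

tokens = ["the", "cat", "sat", "on", "the"]
print(induction_predict(tokens))  # -> "cat"
```

A real model blends this prediction with many other circuits; the hard match here stands in for an attention pattern sharply peaked on the matching position.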

Why It Matters

Induction heads are the best-understood circuit in mechanistic interpretability: a concrete example of how Transformers implement a useful algorithm with learned weights. They explain why few-shot prompting works: when you provide examples, induction heads detect the pattern and apply it. Understanding induction heads provides a foundation for understanding more complex learned behaviors.

Deep Dive

The circuit uses two heads across two layers. The first head (a "previous token head" in an earlier layer) copies information about which token preceded the current one. The second head (the actual "induction head" in a later layer) uses this information to complete patterns: if token B was preceded by A earlier in the context, and A appears again, the induction head boosts the prediction of B. This is a simple but powerful form of in-context learning.
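The two-head composition described above can be sketched as follows. This is a toy illustration, not real Transformer code, and the function names are assumptions: the "previous token head" writes, at each position, which token preceded it, and the "induction head" then attends from the current token to earlier positions whose stored predecessor matches it, copying that position's token.

```python
# Toy sketch of the two-layer induction circuit (illustrative only).
def previous_token_head(tokens):
    # After this head, position i "knows" the token at position i-1.
    return [None] + tokens[:-1]

def induction_head(tokens, prev_info):
    current = tokens[-1]
    scores = {}
    # Attend to earlier positions whose predecessor equals the
    # current token; those positions held B right after an A.
    for i in range(len(tokens) - 1):
        if prev_info[i] == current:
            scores[tokens[i]] = scores.get(tokens[i], 0) + 1
    # Hard argmax stands in for the soft attention-weighted copy.
    return max(scores, key=scores.get) if scores else None

tokens = list("abca")  # saw "a b" earlier, now see "a" again
prev = previous_token_head(tokens)
print(induction_head(tokens, prev))  # -> "b"
```

The key point the sketch captures is the composition: the induction head cannot complete the pattern alone; it relies on the predecessor information the earlier head already wrote into the residual stream.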

Discovery and Verification

Olsson et al. (2022, Anthropic) identified induction heads through careful analysis of attention patterns in Transformers of various sizes. They observed a phase change during training: induction heads form suddenly, and their formation coincides with a dramatic improvement in the model's ability to do in-context learning. This suggests that induction heads are not just one of many circuits but a foundational capability that enables higher-level in-context learning.

Beyond Simple Patterns

Real-world in-context learning is more complex than "A B ... A → B." Models learn to generalize patterns: "capital of France is Paris, capital of Germany is Berlin, capital of Japan is..." requires understanding the abstract pattern, not just copying. Research suggests that more complex induction-like circuits build on the basic induction head mechanism, composing it with other circuits to handle abstraction and generalization.
