Fundamentals

Induction Head

A specific circuit of two attention heads discovered in Transformers that implements in-context learning via pattern matching. If the model has seen the pattern "A B" earlier in the context and now sees "A" again, the induction head predicts that "B" will follow. This simple mechanism is believed to be a fundamental building block of how LLMs learn from examples in their context.

Why It Matters

Induction heads are the best-understood circuit in mechanistic interpretability — a concrete example of how Transformers implement a useful algorithm from learned weights. They explain why few-shot prompting works: when you provide examples, induction heads detect the pattern and apply it. Understanding induction heads provides a foundation for understanding more complex learned behaviors.

Deep Dive

The circuit uses two heads across two layers. The first head (a "previous token head" in an earlier layer) copies information about which token preceded the current one. The second head (the actual "induction head" in a later layer) uses this information to complete patterns: if token B was preceded by A earlier in the context, and A appears again, the induction head boosts the prediction of B. This is a simple but powerful form of in-context learning.
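The two-step mechanism above can be sketched in plain Python. This is a hypothetical toy simulation over token strings, not real attention arithmetic: `previous_token_head` and `induction_head` are illustrative names for the roles the two heads play.

```python
def previous_token_head(tokens):
    """Layer-1 'previous token head': for each position, record
    which token immediately preceded it."""
    return {i: tokens[i - 1] for i in range(1, len(tokens))}

def induction_head(tokens, prev_info):
    """Layer-2 'induction head': find an earlier position whose
    *preceding* token matches the current (last) token, and predict
    the token that occurred at that position."""
    current = tokens[-1]
    for i in range(1, len(tokens) - 1):
        if prev_info[i] == current:   # earlier occurrence of "A B"
            return tokens[i]          # so predict "B" again
    return None                       # no earlier match: no prediction

tokens = ["the", "cat", "sat", "on", "the"]
prediction = induction_head(tokens, previous_token_head(tokens))
print(prediction)  # → "cat"
```

In a real Transformer the same composition happens in attention space: the previous-token head writes "what came before me" into each position's residual stream, and the induction head attends to positions whose previous token matches the current one.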

Discovery and Verification

Olsson et al. (2022, Anthropic) identified induction heads through careful analysis of attention patterns in Transformers of various sizes. They observed a phase change during training: induction heads form suddenly, and their formation coincides with a dramatic improvement in the model's ability to do in-context learning. This suggests that induction heads are not just one of many circuits but a foundational capability that enables higher-level in-context learning.

Beyond Simple Patterns

Real-world in-context learning is more complex than "A B ... A → B." Models learn to generalize patterns: "capital of France is Paris, capital of Germany is Berlin, capital of Japan is..." requires understanding the abstract pattern, not just copying. Research suggests that more complex induction-like circuits build on the basic induction head mechanism, composing it with other circuits to handle abstraction and generalization.
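The limits of pure surface-level copying can be seen with a small sketch. The helper below (a hypothetical illustration, not a real model component) completes the most recent literal "A B ... A" match — and on the capitals example it echoes the previous completion rather than producing the abstractly correct one:

```python
def literal_induction(tokens):
    """Predict the token that followed the most recent earlier
    occurrence of the final token; None if it never appeared."""
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):  # scan backwards
        if tokens[i] == current:
            return tokens[i + 1]
    return None

ctx = ("capital of France is Paris , "
       "capital of Germany is Berlin , "
       "capital of Japan is").split()
print(literal_induction(ctx))  # → "Berlin", not "Tokyo"
```

Literal matching latches onto the last "is" and copies "Berlin"; producing "Tokyo" requires composing the induction mechanism with circuits that represent the abstract relation, which is what the research described above suggests larger models learn to do.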

Related Concepts
