
Induction Head

A specific two-head attention circuit discovered in Transformers that implements in-context learning via pattern matching. If the model has seen the pattern "A B" earlier in the context and now sees "A" again, the induction head predicts that "B" will follow. This simple mechanism is believed to be a fundamental building block of how LLMs learn from examples in their context.

Why It Matters

Induction heads are the best-understood circuit in mechanistic interpretability: a concrete example of how Transformers implement a useful algorithm out of learned weights. They explain why few-shot prompting works: when you provide examples, induction heads detect the pattern and apply it. Understanding induction heads provides a foundation for understanding more complex learned behaviors.

Deep Dive

The circuit uses two heads across two layers. The first head (a "previous token head" in an earlier layer) copies information about which token preceded the current one. The second head (the actual "induction head" in a later layer) uses this information to complete patterns: if token B was preceded by A earlier in the context, and A appears again, the induction head boosts the prediction of B. This is a simple but powerful form of in-context learning.
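The two-step circuit can be caricatured in plain Python. This is a toy lookup, not real soft attention: the dictionary stands in for the previous-token head's output, and the final lookup stands in for the induction head's prefix matching. The function name is illustrative.

```python
def induction_predict(tokens):
    """Toy sketch of the induction-head algorithm: A B ... A -> B."""
    # "Previous token head": for each token, record which token followed it
    # earlier in the context (later occurrences overwrite earlier ones).
    follower = {}
    for prev, nxt in zip(tokens, tokens[1:]):
        follower[prev] = nxt
    # "Induction head": match the current (last) token against its earlier
    # occurrence and predict the token that followed it last time.
    return follower.get(tokens[-1])

print(induction_predict(["the", "cat", "sat", "on", "the"]))  # -> "cat"
```

A real induction head does this softly, over attention scores, and in parallel for every position; the hard dictionary lookup here is only meant to show the information flow between the two heads.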

Discovery and Verification

Olsson et al. (2022, Anthropic) identified induction heads through careful analysis of attention patterns in Transformers of various sizes. They observed a phase change during training: induction heads form suddenly, and their formation coincides with a dramatic improvement in the model's ability to do in-context learning. This suggests that induction heads are not just one of many circuits but a foundational capability that enables higher-level in-context learning.

Beyond Simple Patterns

Real-world in-context learning is more complex than "A B ... A → B." Models learn to generalize patterns: "capital of France is Paris, capital of Germany is Berlin, capital of Japan is..." requires understanding the abstract pattern, not just copying. Research suggests that more complex induction-like circuits build on the basic induction head mechanism, composing it with other circuits to handle abstraction and generalization.
