Basics

Positional Encoding

Positional Embedding, RoPE, ALiBi
The mechanism that tells a Transformer model the order of tokens in a sequence. Unlike RNNs, which process tokens one at a time (so position is implicit), Transformers process all tokens in parallel and have no inherent sense of order. Positional encoding injects position information so the model knows that "dog bites man" and "man bites dog" are different.

Why It Matters

Without position information, a Transformer treats a sentence as a bag of words: word order is lost. The choice of positional encoding also determines how well a model handles sequences longer than those seen during training, which is why techniques like RoPE and ALiBi are critical for long-context models.

Deep Dive

The original Transformer (2017) used fixed sinusoidal functions at different frequencies for each position and dimension. These had a nice theoretical property: the model could learn to attend to relative positions, because for any fixed offset k the encoding of position pos + k is a linear function of the encoding of pos. But learned positional embeddings (a trainable vector for each position) quickly became the default because they performed slightly better, despite being limited to the maximum training length.
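A minimal sketch of the sinusoidal scheme in NumPy (the function name and example sizes are illustrative, not from any particular library):

```python
import numpy as np

def sinusoidal_positions(seq_len: int, d_model: int) -> np.ndarray:
    """Fixed sinusoidal encodings: even dims get sin(pos / 10000^(2i/d_model)),
    odd dims get the matching cosine."""
    positions = np.arange(seq_len)[:, None]                 # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)  # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions: cosine
    return pe  # added to the token embeddings before the first layer

# Example: encodings for an 8-token sequence with model width 16
print(sinusoidal_positions(8, 16).shape)  # (8, 16)
```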

RoPE: The Modern Standard

Rotary Position Embeddings (RoPE, Su et al., 2021) encode position by rotating the query and key vectors in the attention mechanism. The angle of rotation depends on position, so the dot product between two tokens naturally encodes their relative distance. RoPE is used by LLaMA, Mistral, Qwen, and most modern LLMs. Its key advantage: it enables length extrapolation — models can handle sequences somewhat longer than those seen during training, especially when combined with techniques like YaRN or NTK-aware scaling.
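A rough sketch of the rotation, assuming each head's dimensions are split into (even, odd) pairs; the name `rope_rotate` and the shapes are illustrative, and real implementations (e.g. in LLaMA) fuse this into the attention computation:

```python
import numpy as np

def rope_rotate(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embedding to x of shape (seq_len, head_dim).

    Each pair of dimensions (2i, 2i+1) is rotated by angle position * theta_i,
    where theta_i = base^(-2i / head_dim). The dot product between two rotated
    vectors then depends on their relative distance.
    """
    theta = base ** (-np.arange(0, x.shape[1], 2) / x.shape[1])  # (head_dim/2,)
    angles = positions[:, None] * theta[None, :]                 # (seq_len, head_dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    rotated = np.empty_like(x)
    rotated[:, 0::2] = x_even * cos - x_odd * sin  # standard 2D rotation per pair
    rotated[:, 1::2] = x_even * sin + x_odd * cos
    return rotated

# Queries and keys are rotated before the attention dot product:
q, k = np.random.randn(8, 64), np.random.randn(8, 64)
pos = np.arange(8)
scores = rope_rotate(q, pos) @ rope_rotate(k, pos).T  # relative position is now baked in
```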

ALiBi and Beyond

ALiBi (Attention with Linear Biases) takes a simpler approach: instead of modifying embeddings, it adds a linear penalty to attention scores based on distance between tokens. Farther tokens get penalized more. This requires no learned parameters and extrapolates well to longer sequences. Some architectures combine approaches or use relative position biases. The trend is toward methods that generalize beyond the training length, since context windows keep growing.
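A small sketch of the ALiBi bias matrix for causal attention, assuming a power-of-two head count so the per-head slopes follow the paper's geometric series; `alibi_bias` is an illustrative name:

```python
import numpy as np

def alibi_bias(seq_len: int, num_heads: int) -> np.ndarray:
    """Linear distance penalty added to attention scores, one slope per head."""
    # Slopes 2^(-8/n), 2^(-16/n), ... for n heads (power-of-two case)
    slopes = 2.0 ** (-8.0 * np.arange(1, num_heads + 1) / num_heads)
    # distance[i, j] = j - i; keep only past positions (j <= i), zero elsewhere
    distance = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]
    distance = np.minimum(distance, 0)
    # Farther-back tokens get a larger negative bias
    return slopes[:, None, None] * distance[None, :, :]  # (num_heads, seq_len, seq_len)

# Usage: attention_scores = q @ k.T / sqrt(d) + alibi_bias(seq_len, num_heads)
print(alibi_bias(seq_len=6, num_heads=8).shape)  # (8, 6, 6)
```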
