
Encoder-Decoder

Seq2Seq, Sequence-to-Sequence
A model architecture with two separate parts: an encoder that reads the input and compresses it into a representation, and a decoder that generates the output from that representation. The original Transformer paper described an encoder-decoder. T5 and BART are encoder-decoder models. By contrast, GPT, Claude, and Llama are decoder-only (no encoder), and BERT is encoder-only (no decoder).
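As a concrete sketch (assuming the Hugging Face transformers library and the public t5-small checkpoint; any encoder-decoder checkpoint would do), the whole pipeline fits in a few lines: the encoder consumes the prompt, the decoder generates the target sequence.

```python
# Sketch: running an encoder-decoder (seq2seq) model with Hugging Face transformers.
# "t5-small" is just an example checkpoint, not a requirement.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Encoder reads the full input; decoder generates the output token by token.
inputs = tokenizer("translate English to German: The house is small.",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```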

Why It Matters

Understanding encoder-decoder vs. decoder-only explains why different models excel at different tasks. Encoder-decoder models are a natural fit for tasks that transform one sequence into another (translation, summarization). Decoder-only models are better at open-ended generation. The field has converged on decoder-only for LLMs, but encoder-decoder is far from dead.

Deep Dive

In an encoder-decoder Transformer, the encoder processes the full input using bidirectional self-attention — every token can see every other token. This creates a rich representation of the input. The decoder then generates output tokens autoregressively, attending to both the previously generated tokens (via masked self-attention) and the encoder's representations (via cross-attention). This cross-attention is the bridge between understanding and generation.
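A minimal PyTorch sketch of one decoder layer makes the two attention paths explicit. The dimensions, names, and layer layout here are illustrative assumptions, not the exact configuration from the original paper.

```python
# Sketch of a single Transformer decoder layer: masked self-attention over the
# target sequence, then cross-attention into the encoder's output ("memory").
import torch
import torch.nn as nn

class DecoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, tgt, memory):
        # Masked self-attention: each target position attends only to earlier positions.
        T = tgt.size(1)
        causal_mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        x, _ = self.self_attn(tgt, tgt, tgt, attn_mask=causal_mask)
        tgt = self.norm1(tgt + x)
        # Cross-attention: queries come from the decoder, keys/values from the
        # encoder's representations. This is the bridge described above.
        x, _ = self.cross_attn(tgt, memory, memory)
        tgt = self.norm2(tgt + x)
        return self.norm3(tgt + self.ffn(tgt))
```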

Decoder-Only Won

Modern LLMs (GPT, Claude, Llama, Gemini) are all decoder-only: there's no separate encoder, and the model uses causal (left-to-right) attention throughout. Why did decoder-only win? Simplicity and scaling. Encoder-decoder requires two separate attention mechanisms and the architecture introduces questions about how to split capacity between encoder and decoder. Decoder-only is uniform and scales cleanly. It also handles both understanding and generation in one architecture by treating every task as text generation.
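To see "every task as text generation" concretely, here is a toy sketch using Hugging Face transformers with the small gpt2 checkpoint as a stand-in. A base model this small won't follow the instruction well, but the mechanism is the point: one token stream, predicted left to right, with the task expressed entirely in the prompt.

```python
# Sketch: a decoder-only model handles a "summarization task" simply by
# continuing a single text stream. No encoder, no cross-attention.
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Summarize: The cat sat on the mat because it was warm.\nSummary:"
ids = tok(prompt, return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```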

Encoder-Only: BERT's Legacy

Encoder-only models like BERT use bidirectional attention (every token sees all other tokens) and are trained with masked language modeling. They can't generate text, but they produce excellent representations for classification, NER, semantic similarity, and search. Most embedding models used in RAG pipelines are encoder-only. They're smaller, faster, and cheaper than LLMs for tasks that don't require generation.
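As an illustrative sketch (assuming the sentence-transformers package; all-MiniLM-L6-v2 is one common checkpoint, not a requirement), this is the kind of encoder-only embedding step a RAG retriever runs: encode documents and a query, then rank by cosine similarity.

```python
# Sketch: semantic search with an encoder-only embedding model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["The encoder reads the whole input bidirectionally.",
        "Decoder-only models generate text left to right."]
query = "How does the encoder see its input?"

doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_emb)  # higher cosine similarity = closer match
print(scores)
```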
