
Encoder-Decoder

Seq2Seq, Sequence-to-Sequence
A model architecture with two distinct parts: an encoder that reads and compresses the input into a representation, and a decoder that generates the output from that representation. The original Transformer paper described an encoder-decoder. T5 and BART are encoder-decoder models. In contrast, GPT, Claude, and Llama are decoder-only (no encoder), and BERT is encoder-only (no decoder).
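
A minimal sketch of the three families as exposed by the Hugging Face transformers library (the checkpoints named below are illustrative choices, not the only options):

    # The three architecture families, via Hugging Face transformers.
    # Checkpoint names are illustrative; weights download on first use.
    from transformers import (
        T5ForConditionalGeneration,  # encoder-decoder: sequence -> sequence
        GPT2LMHeadModel,             # decoder-only: autoregressive generation
        BertModel,                   # encoder-only: representations, no generation
    )

    enc_dec = T5ForConditionalGeneration.from_pretrained("t5-small")
    dec_only = GPT2LMHeadModel.from_pretrained("gpt2")
    enc_only = BertModel.from_pretrained("bert-base-uncased")

    # Only the encoder-decoder model carries both halves.
    print(type(enc_dec.encoder).__name__)   # T5Stack
    print(type(enc_dec.decoder).__name__)   # T5Stack
    print(hasattr(dec_only, "decoder"))     # False: GPT-2 is one causal stack
    print(hasattr(enc_only, "decoder"))     # False: BERT has no decoder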

Why it matters

Understanding encoder-decoder vs. decoder-only explains why different models excel at different tasks. Encoder-decoder models are naturally good at tasks that transform one sequence into another (translation, summarization). Decoder-only models are better at open-ended generation. The field has largely converged on decoder-only for LLMs, but encoder-decoder is far from dead.

Deep Dive

In an encoder-decoder Transformer, the encoder processes the full input using bidirectional self-attention — every token can see every other token. This creates a rich representation of the input. The decoder then generates output tokens autoregressively, attending to both the previously generated tokens (via masked self-attention) and the encoder's representations (via cross-attention). This cross-attention is the bridge between understanding and generation.
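
A minimal PyTorch sketch of one decoder layer, assuming standard (batch, seq, d_model) shapes and eliding dropout and other details; the two attention calls mirror the masked self-attention and cross-attention described above:

    import torch
    import torch.nn as nn

    class DecoderLayer(nn.Module):
        """One Transformer decoder layer: masked self-attention over the
        generated prefix, then cross-attention into the encoder's output.
        A simplified sketch, not a faithful reproduction of any one model."""
        def __init__(self, d_model=512, n_heads=8):
            super().__init__()
            self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model),
                                    nn.ReLU(),
                                    nn.Linear(4 * d_model, d_model))
            self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

        def forward(self, tgt, memory):
            # Causal mask: position i may only attend to positions <= i.
            t = tgt.size(1)
            causal = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
            x, _ = self.self_attn(tgt, tgt, tgt, attn_mask=causal)
            tgt = self.norm1(tgt + x)
            # Cross-attention: queries come from the decoder,
            # keys/values from the encoder's representations.
            x, _ = self.cross_attn(tgt, memory, memory)
            tgt = self.norm2(tgt + x)
            return self.norm3(tgt + self.ff(tgt))

    layer = DecoderLayer()
    memory = torch.randn(2, 10, 512)   # encoder output: (batch, src_len, d_model)
    tgt = torch.randn(2, 7, 512)       # embedded target prefix: (batch, tgt_len, d_model)
    out = layer(tgt, memory)           # (2, 7, 512)

The second attention call is the bridge the paragraph describes: the decoder asks questions (queries) of the encoder's representation of the input (keys and values).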

Decoder-Only Won

Modern LLMs (GPT, Claude, Llama, Gemini) are all decoder-only: there's no separate encoder, and the model uses causal (left-to-right) attention throughout. Why did decoder-only win? Simplicity and scaling. Encoder-decoder needs cross-attention on top of self-attention, and the split architecture forces a decision about how to divide capacity between encoder and decoder. Decoder-only is uniform and scales cleanly. It also handles both understanding and generation in one architecture by treating every task as text generation.
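
A small illustration of the single attention rule decoder-only models apply: input and output live in one token stream, and every position attends only leftward (the token strings here are made up for illustration):

    import torch

    # Decoder-only framing: a translation task is just input and output
    # concatenated into one left-to-right sequence.
    tokens = ["Translate:", "chat", "=>", "cat", "<eos>"]
    t = len(tokens)

    # Lower-triangular mask: row i marks which positions token i may see.
    causal_mask = torch.tril(torch.ones(t, t, dtype=torch.bool))
    print(causal_mask.int())
    # tensor([[1, 0, 0, 0, 0],
    #         [1, 1, 0, 0, 0],
    #         [1, 1, 1, 0, 0],
    #         [1, 1, 1, 1, 0],
    #         [1, 1, 1, 1, 1]], dtype=torch.int32)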

Encoder-Only: BERT's Legacy

Encoder-only models like BERT use bidirectional attention (every token sees all other tokens) and are trained with masked language modeling. They can't generate text, but they produce excellent representations for classification, NER, semantic similarity, and search. Most embedding models used in RAG pipelines are encoder-only. They're smaller, faster, and cheaper than LLMs for tasks that don't require generation.
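
A minimal sketch of using an encoder-only model as an embedding backbone, assuming the Hugging Face transformers library; mean pooling over token states is one common recipe, which dedicated embedding models refine:

    import torch
    from transformers import AutoTokenizer, AutoModel

    # Encoder-only model as a feature extractor: no generation, just one
    # representation per token, pooled into one vector per sentence.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    sentences = ["How do I reset my password?", "password reset instructions"]
    batch = tokenizer(sentences, padding=True, return_tensors="pt")

    with torch.no_grad():
        hidden = model(**batch).last_hidden_state   # (batch, seq, hidden)

    # Mean-pool over real tokens (mask out padding), then compare.
    mask = batch["attention_mask"].unsqueeze(-1)    # (batch, seq, 1)
    emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    sim = torch.cosine_similarity(emb[0], emb[1], dim=0)
    print(f"cosine similarity: {sim:.3f}")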
