Fundamentals

Encoder

Encoder Network, Feature Extractor
A neural network component that converts input data into a compressed, information-rich representation (the encoding). In Transformers, the encoder uses bidirectional attention to process the full input and produce contextual representations. In autoencoders, the encoder compresses the input into a latent bottleneck. In image generation, the VAE encoder maps images into latent space. Encoders are the "understanding" half of many architectures.
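A minimal sketch of this compression idea, assuming PyTorch; the `Encoder` class and its layer sizes are illustrative, not taken from any particular model. The input is squeezed through a bottleneck into a much smaller latent code:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Illustrative encoder: maps a 784-dim input to a 32-dim latent code."""
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),  # bottleneck: the compressed representation
        )

    def forward(self, x):
        return self.net(x)

x = torch.randn(8, 784)   # batch of 8 flattened inputs
z = Encoder()(x)          # encodings
print(z.shape)            # torch.Size([8, 32])
```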

Why it matters

Encoders are everywhere: BERT is an encoder, CLIP has a text encoder and an image encoder, Stable Diffusion has a VAE encoder, and RAG systems use encoder models to produce embeddings. Understanding what an encoder does (compress the input into a useful representation) helps you understand all of these systems. The quality of the encoding determines the quality of everything downstream.

Deep Dive

In a Transformer encoder (BERT, the left half of T5), every token attends to every other token bidirectionally. This means the representation of the word "bank" incorporates information from both "river" (left context) and "fishing" (right context) simultaneously. This bidirectional attention is why encoder representations are richer than decoder (left-to-right only) representations for understanding tasks.
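A minimal sketch of bidirectional self-attention, assuming PyTorch; the tensor sizes are illustrative. With no attention mask, every position attends to every other, so the attention weights form a dense matrix rather than a causal (lower-triangular) one:

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=16, num_heads=1, batch_first=True)
tokens = torch.randn(1, 5, 16)             # 5 token embeddings (batch of 1)
_, weights = attn(tokens, tokens, tokens)  # no mask => bidirectional attention
print(weights.shape)                       # (1, 5, 5): each token attends to all 5
```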

Encoder vs. Decoder

The key distinction: encoders process input (understanding), decoders generate output (creation). Encoders see everything at once (bidirectional). Decoders see only past tokens (causal/left-to-right). This is why encoder models (BERT) are better for classification and search, while decoder models (GPT, Claude) are better for generation. Encoder-decoder models (T5, BART) use an encoder for input understanding and a decoder for output generation, connected by cross-attention.
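A sketch of the masking difference, assuming PyTorch; the scores here are random placeholders. Encoder-style attention uses the scores as-is, while decoder-style attention adds a causal mask that blocks attention to future positions:

```python
import torch

seq_len = 5
scores = torch.randn(seq_len, seq_len)  # raw attention scores (placeholder values)

# Causal mask: True above the diagonal, i.e. the "future" positions.
causal_mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()

encoder_weights = torch.softmax(scores, dim=-1)  # bidirectional: dense weights
decoder_weights = torch.softmax(
    scores.masked_fill(causal_mask, float("-inf")), dim=-1
)                                                # causal: lower-triangular weights
print(decoder_weights)  # token i has zero weight on tokens j > i
```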

Encoders in Multimodal Systems

Multimodal systems typically use separate encoders for each modality: a vision encoder (ViT) for images, a text encoder (BERT/CLIP) for text, and potentially audio encoders for speech. These produce embeddings in a shared space, enabling cross-modal understanding. The quality of each encoder determines how well the system understands that modality. This is why CLIP's training (aligning image and text encoders) was so impactful — it created a bridge between vision and language understanding.
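A hedged sketch of a CLIP-style shared embedding space, assuming PyTorch; `image_encoder` and `text_encoder` below are hypothetical stand-ins (plain linear projections) rather than real modality encoders. The point is that both outputs are normalized into the same vector space and compared with a dot product:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 64
image_encoder = nn.Linear(2048, embed_dim)  # stand-in for a ViT image encoder
text_encoder = nn.Linear(512, embed_dim)    # stand-in for a text encoder

image_features = torch.randn(4, 2048)       # 4 pooled image features (placeholder)
text_features = torch.randn(4, 512)         # 4 pooled text features (placeholder)

img_emb = F.normalize(image_encoder(image_features), dim=-1)
txt_emb = F.normalize(text_encoder(text_features), dim=-1)

similarity = img_emb @ txt_emb.T             # 4x4 image-text cosine similarity matrix
print(similarity)
```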
