Basics

Encoder

Encoder Network, Feature Extractor
A neural network component that transforms input data into a compressed, information-rich representation (an encoding). In a Transformer, the encoder processes the full input with bidirectional attention, producing contextualized representations. In an autoencoder, the encoder compresses the input into a latent bottleneck. In image generation, the VAE encoder maps images into a latent space. The encoder is the "understanding" half of many architectures.
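
To make the "compress into a latent bottleneck" idea concrete, here is a minimal sketch of an encoder module in PyTorch; the class name SimpleEncoder and the layer sizes are hypothetical choices for illustration, not taken from any particular system.

```python
import torch
import torch.nn as nn

class SimpleEncoder(nn.Module):
    """Hypothetical encoder: compresses a 784-dim input (e.g. a flattened
    28x28 image) into a 32-dim latent code, the bottleneck representation."""
    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),  # the compressed encoding
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

encoder = SimpleEncoder()
x = torch.randn(4, 784)   # a batch of 4 inputs
z = encoder(x)            # shape (4, 32): compact, information-rich codes
print(z.shape)
```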

Why It Matters

Encoders are everywhere: BERT is an encoder, CLIP has a text encoder and an image encoder, Stable Diffusion has a VAE encoder, and RAG systems use encoder models to produce embeddings. Understanding what an encoder does (compressing input into a useful representation) helps you make sense of all of these systems. The quality of the encoding determines the quality of everything downstream.

Deep Dive

In a Transformer encoder (BERT, the left half of T5), every token attends to every other token bidirectionally. This means the representation of the word "bank" incorporates information from both "river" (left context) and "fishing" (right context) simultaneously. This bidirectional attention is why encoder representations are richer than decoder (left-to-right only) representations for understanding tasks.
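
As a small sketch of this effect, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint, the same word "bank" gets a different encoder representation depending on its surrounding context:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumption: Hugging Face transformers with the bert-base-uncased checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bank_embedding(sentence: str) -> torch.Tensor:
    """Return the contextual embedding of the token 'bank' in the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]   # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index("bank")]

a = bank_embedding("I sat on the river bank and went fishing.")
b = bank_embedding("I deposited money at the bank yesterday.")
# Same word, different contexts -> noticeably different encodings.
print(torch.cosine_similarity(a, b, dim=0))
```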

Encoder vs. Decoder

The key distinction: encoders process input (understanding), decoders generate output (creation). Encoders see everything at once (bidirectional). Decoders see only past tokens (causal/left-to-right). This is why encoder models (BERT) are better for classification and search, while decoder models (GPT, Claude) are better for generation. Encoder-decoder models (T5, BART) use an encoder for input understanding and a decoder for output generation, connected by cross-attention.
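
The bidirectional-vs-causal distinction comes down to the attention mask. A minimal PyTorch sketch (the sequence length here is arbitrary, chosen only for illustration):

```python
import torch

seq_len = 5

# Encoder (bidirectional): every position may attend to every other position.
encoder_mask = torch.ones(seq_len, seq_len, dtype=torch.bool)

# Decoder (causal): position i may attend only to positions <= i.
decoder_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

print(encoder_mask.int())  # all ones: full visibility in both directions
print(decoder_mask.int())  # lower-triangular: each token sees only itself and the past
```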

Encoders in Multimodal Systems

Multimodal systems typically use separate encoders for each modality: a vision encoder (ViT) for images, a text encoder (BERT/CLIP) for text, and potentially audio encoders for speech. These produce embeddings in a shared space, enabling cross-modal understanding. The quality of each encoder determines how well the system understands that modality. This is why CLIP's training (aligning image and text encoders) was so impactful — it created a bridge between vision and language understanding.
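
A sketch of how separate encoders land in a shared embedding space, assuming the Hugging Face transformers CLIP implementation and the openai/clip-vit-base-patch32 checkpoint; the blank placeholder image is only for illustration:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumption: Hugging Face transformers CLIP with openai/clip-vit-base-patch32.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))   # placeholder image for illustration
texts = ["a photo of a cat", "a photo of a dog"]

with torch.no_grad():
    # Image encoder (ViT) and text encoder project into the same space.
    image_emb = model.get_image_features(**processor(images=image, return_tensors="pt"))
    text_emb = model.get_text_features(**processor(text=texts, return_tensors="pt", padding=True))

# Cosine similarity in the shared space enables cross-modal comparison.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
print(image_emb @ text_emb.T)   # similarity of the image to each caption
```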
