Basics

Pooling

Max Pooling, Average Pooling
An operation that reduces the spatial dimensionality of data by summarizing a region into a single value. Max pooling takes the maximum value of each region; average pooling takes the mean. In CNNs, pooling layers downsample feature maps between convolutional layers. In Transformers, pooling merges token representations into a single vector (e.g., for classification).

Why It Matters

Pooling is how a neural network moves from local features to global understanding. A CNN might start with 224×224 feature maps and pool down to 7×7 by its final layer, progressively summarizing spatial information. In NLP, mean pooling over token embeddings is the standard way to create a single sentence embedding from a sequence of token representations.

Deep Dive

In CNNs: a 2×2 max pool with stride 2 takes every 2×2 region, keeps the maximum value, and reduces each spatial dimension by half. This achieves two things: translation invariance (small shifts in the input don't change the output) and dimensionality reduction (fewer values to process in subsequent layers). Average pooling does the same but takes the mean, which preserves more information but is less robust to noise.
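A minimal sketch of both operations in PyTorch; the 4×4 input values are made up for illustration:

```python
import torch
import torch.nn as nn

# A single-channel 4x4 input: shape (batch, channels, height, width)
x = torch.tensor([[[[1., 3., 2., 0.],
                    [4., 6., 5., 1.],
                    [7., 2., 8., 3.],
                    [0., 1., 4., 9.]]]])

max_pool = nn.MaxPool2d(kernel_size=2, stride=2)  # keep the max of each 2x2 region
avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)  # keep the mean of each 2x2 region

print(max_pool(x))  # tensor([[[[6., 5.], [7., 9.]]]]) -- each spatial dim halved
print(avg_pool(x))  # tensor([[[[3.5000, 2.0000], [2.5000, 6.0000]]]])
```

Note how the max-pooled output barely changes if the input shifts by one pixel, while the averaged output retains a trace of every value in each region.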

Pooling in NLP

To create a fixed-size embedding from a variable-length sequence of tokens, you need to pool. Common strategies: [CLS] token pooling (use the representation of a special token, as in BERT), mean pooling (average all token representations — usually the best for sentence embeddings), max pooling (take the element-wise max across tokens), and weighted pooling (weight tokens by attention scores). Most embedding models use mean pooling for its simplicity and effectiveness.
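As a sketch, mask-aware mean pooling over token embeddings might look like this in PyTorch; the `mean_pool` helper and all tensor shapes are illustrative assumptions, not from any specific library:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions.

    token_embeddings: (batch, seq_len, hidden) -- encoder's last hidden state
    attention_mask:   (batch, seq_len)         -- 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum only the real tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)       # avoid division by zero
    return summed / counts                         # (batch, hidden)

# Example: 2 sequences of 4 tokens, hidden size 8; the second has one pad token
emb = torch.randn(2, 4, 8)
mask = torch.tensor([[1, 1, 1, 1], [1, 1, 1, 0]])
sentence_emb = mean_pool(emb, mask)  # (2, 8) fixed-size sentence embeddings
```

Masking matters: averaging over padding positions would dilute the embedding with meaningless values, which is why the mask is applied before both the sum and the count.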

Global Average Pooling

In modern vision architectures, global average pooling replaces the fully connected layers that older CNNs used for classification. Instead of flattening the final feature map into a vector (which creates millions of parameters), global average pooling averages each feature map channel to a single number. This produces a compact representation with no learned parameters, acting as a strong regularizer. Vision Transformers address the same need by reading out a single [CLS] token, though some variants average over the patch tokens instead.
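A sketch of global average pooling as a classification head in PyTorch; the 512-channel, 7×7, 1000-class shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Final conv feature map: (batch, channels, height, width)
features = torch.randn(1, 512, 7, 7)

# Global average pooling: average each channel over all spatial positions.
# Equivalent to nn.AdaptiveAvgPool2d(1) followed by flatten; no learned parameters.
pooled = features.mean(dim=(2, 3))   # (1, 512)

# A single small linear layer replaces the multi-million-parameter FC stack
classifier = nn.Linear(512, 1000)    # 1000 classes, e.g. for ImageNet
logits = classifier(pooled)          # (1, 1000)
```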
