Training

Weight Initialization

Xavier Init, Kaiming Init, He Init
How a neural network's weights are set before training begins. Poor initialization can make training fail before it starts (vanishing or exploding activations). Good initialization keeps activations and gradients at reasonable magnitudes across layers. Xavier initialization (for tanh/sigmoid) and Kaiming/He initialization (for ReLU) are the standards, each calibrated to its activation function.

Why It Matters

Initialization looks like a minor detail, but it is critical for training deep networks. A network whose initial weights are too large produces exploding activations; one whose weights are too small produces vanishing activations. Proper initialization puts the network in a "Goldilocks zone" where the signal propagates without exploding or vanishing, a prerequisite for gradient descent to work at all.

Deep Dive

The core principle: initialize weights so that the variance of activations stays approximately constant across layers. If each layer amplifies the signal (variance grows), activations explode; if each layer attenuates it (variance shrinks), activations vanish. Xavier initialization sets the weight variance to 2/(fan_in + fan_out). Kaiming initialization sets it to 2/fan_in, accounting for the fact that ReLU zeros out roughly half of the values.
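A minimal PyTorch sketch of both rules. The helper name init_linear is ours for illustration; PyTorch's built-in nn.init.xavier_normal_ and nn.init.kaiming_normal_ implement the same formulas.

```python
import math
import torch.nn as nn

def init_linear(layer: nn.Linear, activation: str) -> None:
    """Draw weights from N(0, std^2), with std chosen to keep activation variance stable."""
    fan_in, fan_out = layer.in_features, layer.out_features
    if activation in ("tanh", "sigmoid"):
        # Xavier/Glorot: Var(W) = 2 / (fan_in + fan_out)
        std = math.sqrt(2.0 / (fan_in + fan_out))
    elif activation == "relu":
        # Kaiming/He: Var(W) = 2 / fan_in; the factor of 2 compensates for
        # ReLU zeroing out roughly half of the pre-activations.
        std = math.sqrt(2.0 / fan_in)
    else:
        raise ValueError(f"unsupported activation: {activation}")
    nn.init.normal_(layer.weight, mean=0.0, std=std)
    if layer.bias is not None:
        nn.init.zeros_(layer.bias)

layer = nn.Linear(1024, 4096)
init_linear(layer, "relu")  # same std as nn.init.kaiming_normal_(layer.weight, nonlinearity="relu")
```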

In Transformers

Modern Transformers often use a scaled initialization: output projection weights in attention and FFN layers are initialized with standard deviation scaled by 1/√(2×num_layers). This prevents the residual stream from growing too large as contributions from many layers accumulate. GPT-2 and many subsequent models use this "scaled init" approach. Some architectures (like muP/maximal update parameterization) take this further with mathematically derived scaling rules.
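A sketch of the GPT-2-style scaled init, assuming a base standard deviation of 0.02 and that the caller can distinguish residual-stream output projections from other linear layers; BASE_STD, NUM_LAYERS, and is_residual_proj are illustrative names, not a library API.

```python
import math
import torch.nn as nn

BASE_STD = 0.02   # GPT-2's base init scale; check your model's config
NUM_LAYERS = 12   # illustrative depth

def init_transformer_linear(module: nn.Linear, is_residual_proj: bool) -> None:
    """Plain layers get BASE_STD; residual-stream output projections are scaled down."""
    std = BASE_STD
    if is_residual_proj:
        # Each block writes two contributions into the residual stream
        # (attention output proj + FFN output proj), so 2 * NUM_LAYERS
        # additions accumulate; dividing std by sqrt(2 * NUM_LAYERS) keeps
        # the stream's variance roughly independent of depth.
        std /= math.sqrt(2 * NUM_LAYERS)
    nn.init.normal_(module.weight, mean=0.0, std=std)
    if module.bias is not None:
        nn.init.zeros_(module.bias)
```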

Pre-Trained Weights

For most practical purposes, initialization from scratch is rare: you start from pre-trained weights and fine-tune. But initialization still matters for new components such as LoRA adapters, new classification heads, or extended vocabulary embeddings. Initializing LoRA's B matrix to zero (so the adapter's output starts at zero and the layer initially behaves exactly like the pre-trained one) and initializing new token embeddings sensibly (typically to the mean of the existing embeddings) are common patterns that prevent the new components from disrupting the pre-trained model at the start of fine-tuning.
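A sketch of both patterns, assuming LoRA's weight update is computed as B @ A and that new vocabulary rows are appended at the end of the embedding matrix; the helper names are illustrative.

```python
import torch
import torch.nn as nn

def init_lora(A: torch.Tensor, B: torch.Tensor) -> None:
    """LoRA's update is B @ A; with B = 0 the adapter contributes nothing at step 0."""
    nn.init.normal_(A, mean=0.0, std=0.02)  # small random A (the LoRA paper uses a Gaussian)
    nn.init.zeros_(B)                       # B = 0  =>  delta_W = B @ A = 0

def init_new_token_embeddings(embedding: nn.Embedding, num_new: int) -> None:
    """Set newly appended vocabulary rows to the mean of the pre-trained rows."""
    with torch.no_grad():
        old = embedding.weight[:-num_new]
        embedding.weight[-num_new:] = old.mean(dim=0, keepdim=True)
```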
