
Weight Initialization

Also known as: Xavier (Glorot) initialization, Kaiming (He) initialization
How neural network weights are set before training begins. Bad initialization can make training fail before it starts (vanishing or exploding activations). Good initialization ensures that activations and gradients maintain reasonable magnitudes across layers. Xavier initialization (for tanh/sigmoid) and Kaiming/He initialization (for ReLU) are the standards, each calibrated to the activation function.

Why it matters

Initialization seems like a minor detail, but it is critical for training deep networks. A network whose initial weights are too large produces exploding activations; one whose weights are too small produces vanishing activations. Proper initialization puts the network in a "goldilocks zone" where signals flow through layers without exploding or vanishing — a prerequisite for gradient descent to work at all.

Deep Dive

The core principle: initialize weights so that the variance of activations is approximately constant across layers. If each layer amplifies the signal (variance grows), activations explode. If each layer diminishes it (variance shrinks), activations vanish. Xavier initialization sets weights to variance 2/(fan_in + fan_out). Kaiming initialization sets variance 2/fan_in, accounting for the fact that ReLU zeros out half the values.
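The variance-preservation principle can be checked directly. The NumPy sketch below (illustrative layer sizes and depth, not from any particular model) implements both formulas and pushes a random signal through a deep ReLU stack initialized with Kaiming init; activation variance stays on the order of 1 rather than collapsing or blowing up.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    # Xavier/Glorot: Var(W) = 2 / (fan_in + fan_out), suited to tanh/sigmoid
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def kaiming_init(fan_in, fan_out):
    # Kaiming/He: Var(W) = 2 / fan_in, the extra factor of 2 compensates
    # for ReLU zeroing out roughly half of each layer's pre-activations
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def relu(x):
    return np.maximum(x, 0.0)

# Push a unit-variance signal through 20 ReLU layers (width 512).
h = rng.normal(0.0, 1.0, size=(1024, 512))
for _ in range(20):
    h = relu(h @ kaiming_init(512, 512))

print(h.var())  # stays O(1) across depth instead of vanishing/exploding
```

Swapping `kaiming_init` for `xavier_init` in the loop makes the variance shrink noticeably with depth, since Xavier's formula does not account for ReLU discarding half the signal.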

In Transformers

Modern Transformers often use a scaled initialization: output projection weights in attention and FFN layers are initialized with standard deviation scaled by 1/√(2×num_layers). This prevents the residual stream from growing too large as contributions from many layers accumulate. GPT-2 and many subsequent models use this "scaled init" approach. Some architectures (like muP/maximal update parameterization) take this further with mathematically derived scaling rules.
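A minimal sketch of this scaled-init rule, assuming GPT-2's base standard deviation of 0.02 (the helper name and layer sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def gpt2_style_init(fan_in, fan_out, num_layers, is_residual_proj):
    # Base std 0.02 as in GPT-2. Output projections that write into the
    # residual stream are further scaled by 1/sqrt(2 * num_layers):
    # each block contributes twice (attention + FFN), so the sum of
    # 2 * num_layers roughly-independent contributions keeps unit scale.
    std = 0.02
    if is_residual_proj:
        std /= np.sqrt(2 * num_layers)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

W_hidden = gpt2_style_init(768, 768, num_layers=12, is_residual_proj=False)
W_resid = gpt2_style_init(768, 768, num_layers=12, is_residual_proj=True)
```

For a 12-layer model this shrinks the residual projections' std by a factor of √24 ≈ 4.9 relative to all other weights.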

Pre-Trained Weights

For most practical purposes, initialization from scratch is rare — you start from pre-trained weights and fine-tune. But initialization still matters for the new components: LoRA adapters, new classification heads, or extended vocabulary embeddings. Zero initialization for LoRA's B matrix (so the adapter's update starts at zero and the layer initially behaves exactly like the pre-trained one) and sensible initialization for new token embeddings (typically copying the mean of the existing embeddings) are common patterns that prevent the new components from disrupting the pre-trained model at the start of fine-tuning.
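Both patterns fit in a few lines. A NumPy sketch (hidden size, rank, and vocabulary size are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 768, 8  # hidden size and LoRA rank (illustrative values)

# LoRA: the adapted weight is W + B @ A. A gets a small random init,
# B starts at zero, so B @ A = 0 and the adapted layer initially
# computes exactly what the frozen pre-trained layer computes.
A = rng.normal(0.0, 0.02, size=(r, d))
B = np.zeros((d, r))

# Extending the vocabulary: initialize each new row to the mean of the
# existing embeddings, so new tokens start as an "average token"
# rather than out-of-distribution noise.
vocab_emb = rng.normal(0.0, 0.02, size=(1000, d))  # stand-in for pre-trained embeddings
new_rows = np.tile(vocab_emb.mean(axis=0), (3, 1))  # 3 new tokens
extended = np.vstack([vocab_emb, new_rows])
```

In both cases the goal is the same: at step zero of fine-tuning, the model's forward pass is unchanged (LoRA) or minimally perturbed (new embeddings), so early gradients reflect the task rather than the shock of random new parameters.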
