Training

Weight Initialization

Xavier Init, Kaiming Init, He Init
How neural network weights are set before training begins. Bad initialization can make training fail before it even starts (vanishing or exploding activations). Good initialization ensures that activations and gradients maintain reasonable magnitudes across layers. Xavier initialization (for tanh/sigmoid) and Kaiming/He initialization (for ReLU) are the standards, each calibrated to its activation function.

Why It Matters

Initialization seems like a minor detail but is critical for training deep networks. A network whose random initial weights are too large produces exploding activations; one whose weights are too small produces vanishing activations. Proper initialization keeps the network in a "goldilocks zone" where signals flow without exploding or vanishing, a prerequisite for gradient descent to work at all.

Deep Dive

The core principle: initialize weights so that the variance of activations is approximately constant across layers. If each layer amplifies the signal (variance grows), activations explode. If each layer diminishes it (variance shrinks), activations vanish. Xavier initialization sets weights to variance 2/(fan_in + fan_out). Kaiming initialization sets variance 2/fan_in, accounting for the fact that ReLU zeros out half the values.
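The two variance rules above can be sketched in a few lines of NumPy. This is a minimal illustration, not a library implementation; the layer width (512), depth (20), and batch size (1024) are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    # Xavier/Glorot: Var(W) = 2 / (fan_in + fan_out), suited to tanh/sigmoid
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def kaiming_init(fan_in, fan_out):
    # Kaiming/He: Var(W) = 2 / fan_in, compensating for ReLU zeroing
    # roughly half of the activations
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

# Push a signal through a deep ReLU stack: with Kaiming init the
# activation variance stays O(1) instead of vanishing or exploding.
x = rng.normal(size=(1024, 512))
for _ in range(20):
    x = np.maximum(0.0, x @ kaiming_init(512, 512))
print(float(x.var()))
```

Replacing `kaiming_init` with a much larger or smaller standard deviation in the loop makes the printed variance blow up or collapse toward zero within a few layers, which is the failure mode described above.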

In Transformers

Modern Transformers often use a scaled initialization: output projection weights in attention and FFN layers are initialized with standard deviation scaled by 1/√(2×num_layers). This prevents the residual stream from growing too large as contributions from many layers accumulate. GPT-2 and many subsequent models use this "scaled init" approach. Some architectures (like muP/maximal update parameterization) take this further with mathematically derived scaling rules.
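A sketch of this scaling rule, assuming the GPT-2-style convention of a small fixed base standard deviation (0.02 here, an illustrative value) with the extra 1/√(2×num_layers) factor applied only to the projections that write into the residual stream:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def scaled_init(fan_in, fan_out, num_layers, base_std=0.02, residual_proj=False):
    # Projections that add into the residual stream (attention output
    # projection and FFN down-projection) are shrunk by 1/sqrt(2 * num_layers),
    # since each of the num_layers blocks contributes two such additions.
    std = base_std
    if residual_proj:
        std *= 1.0 / math.sqrt(2 * num_layers)
    return rng.normal(0.0, std, size=(fan_in, fan_out))
```

For a 48-layer model the residual projections start with standard deviation 0.02/√96 ≈ 0.002, so the sum of 96 per-block contributions has roughly the same scale as a single unscaled one.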

Pre-Trained Weights

For most practical purposes, initialization from scratch is rare — you start from pre-trained weights and fine-tune. But initialization still matters for the new components: LoRA adapters, new classification heads, or extended vocabulary embeddings. Zero initialization for LoRA's B matrix (so the adapter's update starts at zero and the combined layer starts identical to the pre-trained weights) and proper initialization for new token embeddings (typically copying the mean of existing embeddings) are common patterns that prevent the new components from disrupting the pre-trained model at the start of fine-tuning.
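Both patterns fit in a short sketch. The dimensions (model width 768, LoRA rank 8, vocabulary size 50257) and the 0.01/0.02 standard deviations are illustrative assumptions, not values from any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, rank = 768, 8

# LoRA: the effective weight is W + B @ A. A gets a small random init,
# B starts at zero, so B @ A == 0 and fine-tuning begins exactly at the
# pre-trained model's behavior.
A = rng.normal(0.0, 0.01, size=(rank, d_model))
B = np.zeros((d_model, rank))

# Extended vocabulary: initialize each new token embedding as the mean of
# the existing rows, so new tokens start "typical" rather than as noise.
vocab, n_new = 50257, 2
emb = rng.normal(0.0, 0.02, size=(vocab, d_model))   # stand-in for pre-trained embeddings
new_rows = np.tile(emb.mean(axis=0), (n_new, 1))
emb = np.vstack([emb, new_rows])
```

Because B is exactly zero, the first gradient steps move the adapter away from a no-op rather than having to undo a random perturbation; the mean-embedding trick similarly keeps the logits for old tokens undisturbed at step zero.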
