
Normalization

LayerNorm, RMSNorm, BatchNorm
Techniques that stabilize neural network training by normalizing the values flowing through the network to a consistent scale. Layer Normalization (LayerNorm) normalizes across the features within each example. RMSNorm is a simplified variant. Batch Normalization (BatchNorm) normalizes across the batch. Every Transformer uses some form of normalization between its layers.
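
The distinction between the variants comes down to which axis the statistics are computed over. A minimal NumPy sketch of that difference (array shapes and epsilon are illustrative assumptions, not taken from any particular implementation):

```python
import numpy as np

x = np.random.randn(8, 16)  # illustrative (batch, features) activations
eps = 1e-5                  # small constant for numerical stability

# LayerNorm: mean/variance per example, across the feature axis
ln = (x - x.mean(axis=-1, keepdims=True)) / np.sqrt(x.var(axis=-1, keepdims=True) + eps)

# BatchNorm: mean/variance per feature, across the batch axis
bn = (x - x.mean(axis=0, keepdims=True)) / np.sqrt(x.var(axis=0, keepdims=True) + eps)
```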

Why it matters

Without normalization, deep networks are extremely hard to train: activations can explode or vanish as they pass through the layers, making gradient descent unstable. Normalization is one of those unglamorous techniques that are absolutely essential: remove it from any modern architecture and training collapses.

Deep Dive

LayerNorm (Ba et al., 2016) computes the mean and variance of all activations within a single training example and normalizes them to zero mean and unit variance, then applies learned scale and shift parameters. This ensures that regardless of the input magnitude, each layer receives inputs with a consistent distribution. It's the standard in Transformers.
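
As a sketch of that computation (NumPy, with hypothetical names; in practice a framework module such as torch.nn.LayerNorm learns the scale and shift during training):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize over the last axis to zero mean / unit variance,
    then apply the learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

d_model = 16
x = np.random.randn(4, d_model)                    # (tokens, features)
gamma, beta = np.ones(d_model), np.zeros(d_model)  # typical initialization
y = layer_norm(x, gamma, beta)  # ~zero mean, ~unit variance per token
```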

RMSNorm: The Modern Default

RMSNorm (Zhang & Sennrich, 2019) simplifies LayerNorm by removing the mean centering and only normalizing by the root mean square: x / sqrt(mean(x²)). This is computationally cheaper (no need to compute mean for centering) and performs comparably. LLaMA, Mistral, and most modern LLMs use RMSNorm instead of LayerNorm.
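
A sketch of the formula above (NumPy again; the epsilon placement and learned gain follow the common LLaMA-style convention, stated here as an assumption rather than a quote of any specific codebase):

```python
import numpy as np

def rms_norm(x, gamma, eps=1e-6):
    """Scale by the root mean square of the features; no mean subtraction."""
    rms = np.sqrt((x ** 2).mean(axis=-1, keepdims=True) + eps)
    return gamma * x / rms

d_model = 16
x = np.random.randn(4, d_model)
y = rms_norm(x, np.ones(d_model))  # learned gain, initialized to ones
```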

Pre-Norm vs. Post-Norm

The original Transformer placed normalization after the attention/feed-forward block (post-norm). Modern architectures almost universally use pre-norm: normalize the input before passing it through the block, then add the residual. Pre-norm is more stable during training (especially at large scale) and allows training without learning rate warmup. This seemingly minor architectural choice has a significant impact on training stability.
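
The difference is easiest to see side by side. A minimal sketch (function names are hypothetical; `sublayer` stands for an attention or feed-forward block and `norm` for LayerNorm/RMSNorm):

```python
def post_norm_block(x, sublayer, norm):
    # Original Transformer: run the sublayer, add the residual, then normalize
    return norm(x + sublayer(x))

def pre_norm_block(x, sublayer, norm):
    # Modern default: normalize first; the residual path stays untouched,
    # which keeps gradients well-scaled even in very deep stacks
    return x + sublayer(norm(x))
```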
