Training

Gradient Descent

SGD, Stochastic Gradient Descent, Backpropagation
The algorithm that trains neural networks by iteratively adjusting their parameters to reduce the loss function. It works by computing the gradient (the direction of steepest increase) of the loss with respect to each parameter, then moving each parameter a small step in the opposite direction (downhill). Backpropagation is the technique used to compute these gradients efficiently through the layers of the network.
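In symbols, the basic update (standard notation, not spelled out on this page: θ are the parameters, L the loss, and η the learning rate):

```latex
\theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} L(\theta_t)
```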

Why It Matters

Gradient descent is the engine under the hood of all deep learning. Every model you use, from LLMs to image generators to embedding models, was trained with gradient descent. Understanding it helps you understand training dynamics: why the learning rate matters, why training can diverge or stall, and why modern optimizers like Adam work better than naive gradient descent.

Deep Dive

The full algorithm: (1) take a batch of training examples, (2) run them through the model to get predictions, (3) compute the loss, (4) use backpropagation to compute the gradient of the loss with respect to every parameter, (5) update each parameter by subtracting the gradient times a learning rate, (6) repeat. In practice, "stochastic" gradient descent (SGD) uses random mini-batches rather than the full dataset, which is both computationally necessary (the full dataset doesn't fit in memory) and beneficial (the noise from random batches helps escape local minima).
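As a concrete illustration of steps (1) through (6), here is a minimal sketch of mini-batch SGD on a toy linear-regression problem. The data, learning rate, and batch size are invented for the example; for a one-parameter linear model, the "backpropagation" step reduces to the closed-form gradient of the mean-squared-error loss.

```python
# Minimal mini-batch SGD sketch (illustrative, not from the original page).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + 1 plus noise
X = rng.normal(size=(1000, 1))
y = 3.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=1000)

w, b = 0.0, 0.0    # parameters
lr = 0.1           # learning rate
batch_size = 32

for step in range(500):
    # (1) sample a random mini-batch
    idx = rng.integers(0, len(X), size=batch_size)
    xb, yb = X[idx, 0], y[idx]

    # (2) forward pass: predictions
    preds = w * xb + b

    # (3) loss (mean squared error)
    err = preds - yb
    loss = np.mean(err ** 2)

    # (4) gradients of the loss w.r.t. each parameter
    #     (closed form here; backprop computes these for deep networks)
    grad_w = 2.0 * np.mean(err * xb)
    grad_b = 2.0 * np.mean(err)

    # (5) update: step opposite the gradient, scaled by the learning rate
    w -= lr * grad_w
    b -= lr * grad_b

    if step % 100 == 0:
        print(f"step {step}: loss {loss:.4f}")

print(f"learned w={w:.2f}, b={b:.2f}")  # should approach 3.0 and 1.0
```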

Adam and Modern Optimizers

Plain SGD is rarely used today. Adam (Adaptive Moment Estimation) maintains a running average of both the gradient and its squared magnitude for each parameter, effectively giving each parameter its own adaptive learning rate. Because the update divides by the square root of the squared-gradient average, parameters with consistently large gradients take smaller steps, while parameters with small, noisy gradients take relatively larger ones. AdamW decouples weight decay from the gradient update for better regularization. Most LLM training uses AdamW or variants.
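A sketch of a single AdamW update for one parameter tensor, following the standard decoupled-weight-decay formulation (the hyperparameter defaults shown are common choices, not values from this page):

```python
# One AdamW step (sketch; standard formulation, illustrative defaults).
import numpy as np

def adamw_step(theta, grad, m, v, t, lr=1e-3,
               beta1=0.9, beta2=0.999, eps=1e-8, weight_decay=0.01):
    # Running averages of the gradient (m) and its square (v)
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2

    # Bias correction for the zero-initialized averages (t starts at 1)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    # Per-parameter adaptive step, plus decoupled weight decay
    theta = theta - lr * (m_hat / (np.sqrt(v_hat) + eps)
                          + weight_decay * theta)
    return theta, m, v
```

Dividing `m_hat` by `sqrt(v_hat)` is what makes the step size adaptive: the raw gradient scale largely cancels out, so each parameter moves at a roughly uniform rate set by `lr`.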

The Learning Rate

The learning rate is arguably the single most important hyperparameter in training. Too high and the model overshoots the minimum, loss diverges, and training fails. Too low and training takes forever or gets stuck. Modern training uses learning rate schedules: start with a warmup phase (gradually increasing from near-zero), reach a peak, then decay (cosine annealing is common). The peak learning rate, warmup duration, and decay schedule all interact with batch size and model architecture. Getting this right is a significant part of training large models.
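A sketch of the warmup-then-cosine-decay pattern described above (the peak learning rate, warmup length, total step count, and floor are placeholder values):

```python
# Linear warmup + cosine decay schedule (sketch with made-up constants).
import math

def lr_at(step, peak_lr=3e-4, warmup_steps=1000, total_steps=100_000,
          min_lr=3e-5):
    if step < warmup_steps:
        # Warmup: ramp linearly from ~0 up to the peak learning rate
        return peak_lr * step / warmup_steps
    # Decay: cosine annealing from the peak down to min_lr
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```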
