Training

Gradient Descent

SGD, Stochastic Gradient Descent, Backpropagation
The algorithm that trains neural networks by iteratively adjusting their parameters to reduce the loss function. It works by computing the gradient (the direction of steepest increase) of the loss with respect to each parameter, then moving each parameter a small step in the opposite direction (downhill). Backpropagation is the technique used to compute these gradients efficiently through the layers of the network.
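As a minimal illustration (a toy example assumed here, not taken from this page): on the one-dimensional loss L(w) = (w - 3)^2, whose gradient is 2(w - 3), repeatedly stepping against the gradient walks the parameter toward the minimum at w = 3.

```python
# Toy example (assumed, not from the source): gradient descent on
# L(w) = (w - 3)^2, whose gradient is dL/dw = 2 * (w - 3).
w = 0.0             # initial parameter
learning_rate = 0.1

for step in range(25):
    grad = 2 * (w - 3.0)          # gradient of the loss w.r.t. w
    w = w - learning_rate * grad  # small step in the opposite (downhill) direction

print(w)  # approaches 3.0, the minimum of the loss
```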

Why It Matters

Gradient descent is the engine under the hood of all deep learning. Every model you use, whether an LLM, an image generator, or an embedding model, was trained with gradient descent. Understanding it helps you understand training dynamics: why the learning rate matters, why training can diverge or get stuck, and why modern optimizers like Adam work better than naive gradient descent.

Deep Dive

The full algorithm: (1) take a batch of training examples, (2) run them through the model to get predictions, (3) compute the loss, (4) use backpropagation to compute the gradient of the loss with respect to every parameter, (5) update each parameter by subtracting the gradient times a learning rate, (6) repeat. In practice, "stochastic" gradient descent (SGD) uses random mini-batches rather than the full dataset, which is both computationally necessary (the full dataset doesn't fit in memory) and beneficial (the noise from random batches helps escape local minima).
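A minimal NumPy sketch of that loop, assuming a linear model (predictions = X @ w) and a mean squared error loss; the toy dataset, batch size, and learning rate are illustrative choices, not values from this page.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # toy training inputs
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=1000)   # toy targets

w = np.zeros(5)              # model parameters
lr, batch_size = 0.1, 32

for epoch in range(20):                         # (6) repeat
    perm = rng.permutation(len(X))              # (1) random mini-batches
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        preds = xb @ w                          # (2) forward pass
        err = preds - yb
        loss = np.mean(err ** 2)                # (3) compute the loss
        grad = 2 * xb.T @ err / len(xb)         # (4) gradient of the loss w.r.t. w
        w -= lr * grad                          # (5) subtract gradient times learning rate
```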

Adam and Modern Optimizers

Plain SGD is rarely used today. Adam (Adaptive Moment Estimation) maintains a running average of both the gradient and its squared magnitude for each parameter, effectively giving each parameter its own adaptive learning rate. Parameters with consistently large gradients get smaller updates (they're already well-calibrated), while parameters with small, noisy gradients get larger updates (they need more aggressive movement). AdamW adds weight decay for regularization. Most LLM training uses AdamW or variants.
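A rough sketch of the Adam update described above, written as a single step; the hyperparameter defaults are the commonly cited ones, and the function signature itself is a hypothetical convenience, not a real library API.

```python
import numpy as np

def adam_step(params, grads, m, v, t, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; params/grads/m/v are NumPy arrays, t is the step count (1-based)."""
    m = beta1 * m + (1 - beta1) * grads             # running average of the gradient
    v = beta2 * v + (1 - beta2) * grads ** 2        # running average of the squared gradient
    m_hat = m / (1 - beta1 ** t)                    # bias correction for the early steps
    v_hat = v / (1 - beta2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter adaptive step
    return params, m, v
```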

The Learning Rate

The learning rate is arguably the single most important hyperparameter in training. Too high and the model overshoots the minimum, loss diverges, and training fails. Too low and training takes forever or gets stuck. Modern training uses learning rate schedules: start with a warmup phase (gradually increasing from near-zero), reach a peak, then decay (cosine annealing is common). The peak learning rate, warmup duration, and decay schedule all interact with batch size and model architecture. Getting this right is a significant part of training large models.
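A small sketch of a warmup-then-cosine schedule of the kind described; the peak learning rate, warmup length, and total step count below are illustrative placeholders, not recommendations from this page.

```python
import math

def lr_at(step, peak_lr=3e-4, warmup_steps=2000, total_steps=100_000):
    if step < warmup_steps:
        # linear warmup from near zero up to the peak learning rate
        return peak_lr * step / warmup_steps
    # cosine decay from the peak back down toward zero
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```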
