Training

Batch Size & Epoch

Mini-Batch, Training Epoch
Batch size is the number of training examples the model processes before updating its parameters. An epoch is one complete pass through the entire training dataset. A model trained for 3 epochs on 1 million examples with a batch size of 1,000 processes 1,000 examples per update, takes 1,000 updates per epoch, and performs 3,000 updates in total.
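
To make the arithmetic concrete, here is a minimal sketch using the same illustrative numbers as the example above, showing how batch size and epoch count translate into parameter updates:

```python
# Illustrative numbers from the example above.
dataset_size = 1_000_000   # training examples
batch_size = 1_000         # examples processed per parameter update
epochs = 3                 # full passes over the dataset

updates_per_epoch = dataset_size // batch_size   # 1,000 updates
total_updates = updates_per_epoch * epochs       # 3,000 updates

print(f"{updates_per_epoch} updates per epoch, {total_updates} updates total")
```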

Why it matters

Batch size and epochs are the most fundamental controls in training. Batch size affects training speed, memory usage, and even what the model learns (small batches add noise that can help generalization; large batches converge faster but may generalize worse). The number of epochs determines how many times the model sees each example: too few and it underfits, too many and it overfits.

Deep Dive

In practice, stochastic gradient descent processes the training data in random mini-batches. Each batch gives an estimate of the true gradient — larger batches give better estimates (less noise) but cost more memory and compute per step. Typical batch sizes range from 32 (small models, single GPU) to millions of tokens (LLM pre-training across thousands of GPUs).
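
As an illustration, here is a minimal mini-batch SGD loop in plain NumPy on a toy linear-regression problem. The data, model, and hyperparameters are placeholders chosen for the sketch, not anything prescribed by this article:

```python
import numpy as np

# Toy data: illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))            # 10k examples, 8 features
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=10_000)

w = np.zeros(8)
batch_size, epochs, lr = 32, 5, 0.05

for epoch in range(epochs):                 # one epoch = one full pass over the data
    perm = rng.permutation(len(X))          # shuffle so mini-batches are random
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        # Gradient of mean squared error on this mini-batch:
        # a noisy estimate of the full-dataset gradient.
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)
        w -= lr * grad                      # one parameter update per batch

print("distance from true weights:", np.linalg.norm(w - true_w))
```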

The Large-Batch Training Challenge

LLM pre-training uses enormous effective batch sizes (millions of tokens per update) distributed across many GPUs. At this scale, the learning rate must be carefully tuned — the linear scaling rule (double the batch size, double the learning rate) works up to a point, then breaks down. Gradient accumulation lets you simulate large batches on smaller hardware by accumulating gradients across multiple forward passes before updating.
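
A hedged sketch of gradient accumulation in PyTorch: several small forward/backward passes accumulate gradients, and a single optimizer step applies the combined update. The model, batch sizes, and learning rate below are illustrative assumptions, not values from any particular training recipe:

```python
import torch
from torch import nn

# Placeholder model and hyperparameters.
model = nn.Linear(128, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

micro_batch = 8
accum_steps = 16           # effective batch size = 8 * 16 = 128 examples

optimizer.zero_grad()
for step in range(accum_steps):
    x = torch.randn(micro_batch, 128)      # stand-in for a real data loader
    y = torch.randn(micro_batch, 1)
    loss = loss_fn(model(x), y) / accum_steps  # scale so summed gradients
    loss.backward()                            # average over the effective batch
# One parameter update after accumulating all micro-batch gradients.
optimizer.step()
optimizer.zero_grad()
```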

Epochs in the LLM Era

Modern LLM pre-training typically runs for less than one epoch on the full dataset — the data is so large that the model never sees all of it. This is a shift from classical ML where 10–100 epochs was normal. Research suggests that repeating data (multiple epochs) can actually hurt LLM performance due to memorization effects, though this depends on data quality. Fine-tuning, by contrast, typically runs for 1–5 epochs on a much smaller dataset.
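
As a back-of-the-envelope illustration (all numbers made up), the "effective epochs" of a pre-training run is simply the token budget actually trained on divided by the size of the available dataset:

```python
# Hypothetical numbers for illustration only.
tokens_in_dataset = 10e12        # 10 trillion tokens available
tokens_seen_in_training = 2e12   # token budget actually trained on

effective_epochs = tokens_seen_in_training / tokens_in_dataset
print(f"effective epochs: {effective_epochs:.2f}")   # 0.20 -> less than one pass
```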
