Training

Checkpoint

Model Checkpoint, Snapshot
A saved snapshot of a model's state during training: the weights, optimizer state, learning rate schedule, and current training step. Checkpoints let you resume training after an interruption (hardware failure, preemption), evaluate intermediate versions of the model, and roll back to an earlier version if training degrades. Saving a checkpoint every few thousand steps is standard practice.
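A minimal sketch of what saving and resuming looks like in plain PyTorch; `model`, `optimizer`, and `scheduler` are placeholders for whatever objects your training loop actually uses, and the path is illustrative:

```python
import torch

def save_checkpoint(model, optimizer, scheduler, step, path):
    torch.save({
        "model": model.state_dict(),          # weights
        "optimizer": optimizer.state_dict(),  # e.g. Adam moment estimates
        "scheduler": scheduler.state_dict(),  # learning rate schedule
        "step": step,                         # current training step
        "rng": torch.get_rng_state(),         # RNG state, for reproducible resumes
    }, path)

def load_checkpoint(model, optimizer, scheduler, path):
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    scheduler.load_state_dict(ckpt["scheduler"])
    torch.set_rng_state(ckpt["rng"])
    return ckpt["step"]  # resume the training loop from this step
```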

Why It Matters

Training large models takes days to months. Without checkpoints, a GPU failure at step 90,000 of a 100,000-step run means starting over. Checkpoints are insurance: they save progress incrementally, so you only lose the work since the last checkpoint. They also enable model selection: sometimes an earlier checkpoint performs better on your evaluation metrics than the final one.

Deep Dive

A full checkpoint for a 70B model includes: model weights (~140 GB in FP16), optimizer states (~280 GB for Adam, which stores two moving averages per parameter), learning rate scheduler state, random number generator states, and the current training step. Total: ~420 GB per checkpoint. Saving this to disk takes significant time and storage, which is why checkpointing is done periodically rather than every step.
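The back-of-the-envelope math behind those numbers, assuming FP16 weights and FP16 Adam moments (many real setups keep FP32 optimizer states, which roughly doubles that part):

```python
params = 70e9                         # 70B parameters
weights_gb = params * 2 / 1e9         # FP16 weights: 2 bytes/param -> 140 GB
adam_gb = params * 2 * 2 / 1e9        # two moment estimates per param -> 280 GB
total_gb = weights_gb + adam_gb       # 420 GB, plus small scheduler/RNG state
print(weights_gb, adam_gb, total_gb)  # 140.0 280.0 420.0
```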

Checkpoint Strategies

Common strategies: save every N steps (simple but uses lots of storage), save only the K most recent checkpoints (deleting older ones to save space), save based on evaluation metrics (keep the checkpoint with the best validation loss), and use async checkpointing (save in the background while training continues on the next batch). Large training runs often use all of these: frequent local checkpoints on fast NVMe storage plus periodic remote checkpoints to network storage for disaster recovery.
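The keep-K-most-recent policy, for instance, is only a few lines. A sketch, assuming checkpoints are written as `step-<N>.pt` files in one directory (the naming is illustrative, not a standard):

```python
from pathlib import Path

def prune_checkpoints(ckpt_dir: str, keep_last: int = 3) -> None:
    ckpts = sorted(
        Path(ckpt_dir).glob("step-*.pt"),
        key=lambda p: int(p.stem.split("-")[1]),  # order by training step
    )
    for old in ckpts[:-keep_last]:  # delete everything but the newest K
        old.unlink()
```

Called after each successful save, this keeps local disk usage bounded while the frequent-local-plus-periodic-remote layering handles disaster recovery.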

Checkpoint Conversion

Different frameworks use different checkpoint formats: PyTorch's state_dict, Hugging Face's safetensors, FSDP's sharded checkpoints, and DeepSpeed's ZeRO checkpoints. Converting between formats is a common task — you might train with DeepSpeed (sharded across GPUs) but need a single consolidated checkpoint for inference or uploading to Hugging Face. The safetensors format is becoming the standard for sharing because it's fast to load and memory-safe.
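A sketch of one common direction: loading a consolidated PyTorch state_dict and rewriting it as safetensors. The file names are illustrative, and sharded DeepSpeed or FSDP checkpoints must first be consolidated with the tooling those frameworks provide:

```python
import torch
from safetensors.torch import save_file

# Load a consolidated checkpoint; safetensors stores a flat dict of tensors,
# so any non-tensor entries (step counters, RNG state) must be dropped.
state_dict = torch.load("model.pt", map_location="cpu")
tensors = {k: v.contiguous() for k, v in state_dict.items()
           if isinstance(v, torch.Tensor)}  # save_file requires contiguous tensors
save_file(tensors, "model.safetensors")
```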
