
Mixed Precision Training

FP16, BF16, Half Precision
Training neural networks using lower-precision number formats (16-bit instead of 32-bit) for most computations, while keeping critical operations in full precision. This roughly doubles the effective memory capacity and computation speed of GPUs, with minimal impact on model quality. BF16 (bfloat16) is the standard for LLM training; FP16 is commonly used for inference.

Why It Matters

Mixed precision is the reason we can train models as large as we do. In FP32, a 70B-parameter model needs 280 GB for the weights alone, impossible on any single GPU. In BF16, it needs 140 GB, which fits across a handful of GPUs. Mixed precision effectively doubled the AI industry's compute capacity for free, simply by using a smarter number format.

Deep Dive

The key insight: most neural network computations don't need 32 bits of precision. The weights, activations, and gradients can be represented in 16 bits without meaningful quality loss. But some operations (loss computation, weight updates) need higher precision to avoid numerical instability. Mixed precision keeps a master copy of weights in FP32 for updates, while using FP16/BF16 for the forward and backward passes.
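The need for an FP32 master copy can be shown concretely. The sketch below (plain NumPy, not any framework's actual implementation) applies a small weight update of 1e-4 first directly in FP16, where it rounds away, then against an FP32 master copy, where it survives:

```python
import numpy as np

lr, grad = 1e-4, 1.0
update = np.float32(lr * grad)

# Updating directly in FP16 loses the tiny step: near 1.0, FP16 values are
# spaced ~0.001 apart, so 1.0 - 1e-4 rounds back to exactly 1.0.
w16 = np.float16(1.0)
w16_after = np.float16(w16 - np.float16(update))
print(w16_after == np.float16(1.0))   # True: the update vanished

# With an FP32 master copy the update is retained; a fresh FP16 copy is
# derived from the master each step for the fast forward/backward pass.
master = np.float32(1.0)
master = master - update
compute_copy = master.astype(np.float16)
print(master)                          # ~0.9999: update preserved in FP32
```

In real frameworks this bookkeeping (casting, master weights, and loss scaling for FP16) is handled automatically, e.g. by PyTorch's automatic mixed precision utilities.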

BF16 vs. FP16

FP16 (IEEE half-precision) has 5 exponent bits and 10 mantissa bits. BF16 (Brain Float 16) has 8 exponent bits and 7 mantissa bits. BF16's wider exponent range means it can represent the same range of values as FP32 (avoiding overflow/underflow), while FP16's narrower range requires loss scaling to prevent gradients from underflowing to zero. For training, BF16 is simpler and more stable. For inference, FP16 sometimes offers slightly better precision for the same memory cost.
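The ranges implied by those bit layouts can be computed directly from the exponent and mantissa widths. A short pure-Python sketch (the helper name `fp_range` is ours, not a standard API):

```python
def fp_range(exp_bits: int, mant_bits: int):
    """Largest finite value, smallest positive normal value, and machine
    epsilon for an IEEE-style float with the given field widths."""
    bias = 2 ** (exp_bits - 1) - 1
    # Largest exponent field (all ones) is reserved for inf/NaN, so the
    # biggest finite value uses unbiased exponent equal to the bias.
    max_val = 2.0 ** bias * (2 - 2.0 ** -mant_bits)
    min_normal = 2.0 ** (1 - bias)
    eps = 2.0 ** -mant_bits  # gap between 1.0 and the next representable value
    return max_val, min_normal, eps

for name, e, m in [("FP16", 5, 10), ("BF16", 8, 7), ("FP32", 8, 23)]:
    mx, mn, eps = fp_range(e, m)
    print(f"{name}: max={mx:.3g}  min_normal={mn:.3g}  eps={eps:.3g}")
```

The output shows FP16 overflowing at 65504 and underflowing below ~6e-5 (hence loss scaling, which multiplies the loss by a large constant so gradients stay in representable range), while BF16's maximum matches FP32's ~3.4e38 at the cost of a coarser epsilon.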

FP8 and Beyond

The latest GPUs (NVIDIA H100, H200) support FP8 (8-bit floating point) for even faster computation. FP8 halves memory and doubles throughput compared to FP16, but requires careful handling to avoid quality degradation. Current practice: train in BF16, serve in FP16 or FP8, and quantize to INT4/INT8 for edge deployment. Each step down in precision trades a tiny amount of quality for significant speed and memory gains.
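The INT8 step of that pipeline can be sketched as simple symmetric per-tensor quantization (one of several common schemes; function names here are illustrative, not a specific library's API):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8: map [-max|x|, max|x|] onto [-127, 127]."""
    scale = float(np.abs(x).max()) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(q.nbytes / w.nbytes)       # 0.25: 4x memory saving vs FP32
print(err <= scale / 2 + 1e-7)   # True: error bounded by half a quantization step
```

This is the trade-off the section describes in miniature: each stored value costs 4x less memory than FP32, at the price of a rounding error of at most half a quantization step.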
