
GGUF

GGML Unified Format
The standard file format for running quantized language models locally through llama.cpp, Ollama, and other local inference tools. A GGUF file contains the model weights in quantized form (precision reduced from 16-bit down to 4-bit or 8-bit), along with metadata such as the vocabulary, architecture details, and quantization parameters: everything needed to load and run the model, in a single file.
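The "single file" design starts with a small fixed header. As a minimal sketch, the GGUF spec defines the file as beginning with a 4-byte magic (`GGUF`), a little-endian uint32 version, then uint64 tensor and metadata-entry counts; the function below parses just that header from raw bytes (the field values in the synthetic example are made up for illustration):

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF file header (illustrative sketch).

    Per the GGUF spec, a file begins with:
      magic (4 bytes, b"GGUF"), version (uint32, little-endian),
      tensor_count (uint64), metadata_kv_count (uint64).
    The variable-length metadata key/value section follows and is
    not parsed here.
    """
    if data[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    version, = struct.unpack_from("<I", data, 4)
    tensor_count, kv_count = struct.unpack_from("<QQ", data, 8)
    return {"version": version,
            "tensor_count": tensor_count,
            "metadata_kv_count": kv_count}

# Synthetic header bytes: version 3, 291 tensors, 24 metadata entries.
header = b"GGUF" + struct.pack("<IQQ", 3, 291, 24)
print(read_gguf_header(header))
```

A real reader would continue past the header into the typed key/value metadata (architecture, vocabulary, quantization parameters) and then the tensor data itself.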

Why It Matters

GGUF is the format that made local AI practical. Before it, running models locally required complex setups with PyTorch, CUDA, and a GPU with enough memory. GGUF packages everything into one file that llama.cpp or Ollama can load directly: on CPUs, on Apple Silicon, on gaming GPUs, anywhere. If you see a model on Hugging Face with filenames like "Q4_K_M.gguf", that model is ready for local use.

Deep Dive

GGUF succeeded GGML (the original format), adding a more extensible metadata system and support for new quantization types. A typical model release includes multiple GGUF variants at different quantization levels: Q2_K (smallest, lowest quality), Q4_K_M (popular sweet spot), Q5_K_M (better quality, larger), Q6_K, Q8_0 (near-original quality, largest). The naming convention tells you the bit-width and quantization method.
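The practical difference between those variants is file size versus quality. As a rough sketch (the bits-per-weight figures below are illustrative approximations, not exact on-disk numbers, since real files also carry metadata and per-block scales), you can estimate what each level costs in disk space:

```python
# Rough bits-per-weight for common GGUF quantization levels.
# Illustrative approximations only; actual file sizes vary by
# architecture and quantization details.
BITS_PER_WEIGHT = {
    "FP16": 16.0, "Q8_0": 8.5, "Q6_K": 6.6,
    "Q5_K_M": 5.7, "Q4_K_M": 4.8, "Q2_K": 2.6,
}

def estimated_size_gb(n_params: float, quant: str) -> float:
    """Estimate file size in GB for a model at a given quant level."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / 1e9

for quant in BITS_PER_WEIGHT:
    print(f"7B model at {quant}: ~{estimated_size_gb(7e9, quant):.1f} GB")
```

Running this for a 7B-parameter model shows why Q4_K_M is the sweet spot: roughly a third of the FP16 size while staying well clear of Q2_K's quality floor.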

Quantization Variants

The "K" in Q4_K_M refers to k-quant methods that use different bit-widths for different layers based on their sensitivity — attention layers might get higher precision than feed-forward layers. The "M" means "medium" (between "S" for small/aggressive and "L" for large/conservative). Q4_K_M typically preserves 95%+ of the original model quality while reducing file size by 4x compared to FP16. For most users, Q4_K_M or Q5_K_M is the right choice.
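The k-quant methods themselves are intricate, but the core idea shared by GGUF quant types is storing a small per-block scale plus low-bit integers. A toy version of the simpler Q8_0 scheme (blocks of 32 weights, one scale and 32 signed 8-bit values per block; this is a simplified sketch, not ggml's actual implementation) looks like:

```python
BLOCK = 32  # ggml's Q8_0 groups weights into blocks of 32

def quantize_q8_0(weights):
    """Toy Q8_0-style block quantization: each block of 32 floats
    becomes one float scale plus 32 signed 8-bit integers."""
    blocks = []
    for i in range(0, len(weights), BLOCK):
        chunk = weights[i:i + BLOCK]
        scale = max(abs(w) for w in chunk) / 127 or 1.0
        q = [round(w / scale) for w in chunk]  # each q fits in int8
        blocks.append((scale, q))
    return blocks

def dequantize_q8_0(blocks):
    """Reconstruct approximate weights: w ≈ scale * q."""
    return [scale * q for scale, qs in blocks for q in qs]

weights = [0.02 * i - 0.3 for i in range(64)]   # dummy weights
restored = dequantize_q8_0(quantize_q8_0(weights))
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max reconstruction error: {max_err:.5f}")
```

The reconstruction error per weight is bounded by half the block's scale, which is why quality degrades gracefully as bit-width drops: fewer bits mean coarser integer steps, not lost weights.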

The Ecosystem

GGUF has become the lingua franca of local AI. Community members quantize new models to GGUF within hours of release and upload them to Hugging Face. Tools like llama.cpp, Ollama, LM Studio, GPT4All, and kobold.cpp all support GGUF natively. This ecosystem is why you can download a 70B model at 4-bit quantization (about 40 GB), run it on a MacBook Pro with 64 GB of RAM, and go from loading to first response in under a minute.
