Infrastructure

GGUF

GGML Unified Format
The standard file format for running quantized language models locally with llama.cpp, Ollama, and other local inference tools. A GGUF file contains the model weights in quantized form (precision reduced from 16-bit down to 4-bit or 8-bit) along with metadata such as the vocabulary, architecture details, and quantization parameters: everything needed to load and run the model lives in a single file.
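
Because everything lives in one file, the metadata can be inspected without loading the weights. Below is a minimal sketch assuming the `gguf` Python package that ships with llama.cpp; the file name is a placeholder, and attribute names may vary across package versions.

```python
# A minimal sketch using the `gguf` package from llama.cpp (pip install gguf).
# The file name below is a placeholder for any local GGUF file.
from gguf import GGUFReader

reader = GGUFReader("model-Q4_K_M.gguf")

# Key-value metadata: architecture, vocabulary, quantization parameters, etc.
for name in reader.fields:
    print(name)

# Tensor index: each weight tensor with its shape and quantization type.
for tensor in reader.tensors[:5]:
    print(tensor.name, tensor.shape, tensor.tensor_type)
```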

Why It Matters

GGUF 是让本地 AI 变实用的格式。在它之前,本地跑模型需要 PyTorch、CUDA、特定 GPU 内存的复杂配置。GGUF 把一切打包成一个文件,llama.cpp 或 Ollama 能直接加载 — 在 CPU、Apple Silicon、游戏 GPU 上,任何地方。如果你在 Hugging Face 看到文件名像“Q4_K_M.gguf”的模型,那就是一个本地可用的模型。
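
For illustration, here is a minimal sketch of loading such a file with the llama-cpp-python bindings; the model file name is a placeholder.

```python
# A minimal sketch, assuming llama-cpp-python (pip install llama-cpp-python).
# The model file name is a placeholder for any local GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-8b-instruct-Q4_K_M.gguf",
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload as many layers as possible to GPU/Metal
)

result = llm("Explain GGUF in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```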

Deep Dive

GGUF succeeded GGML (the original format), adding a more extensible metadata system and support for new quantization types. A typical model release includes multiple GGUF variants at different quantization levels: Q2_K (smallest, lowest quality), Q4_K_M (popular sweet spot), Q5_K_M (better quality, larger), Q6_K, Q8_0 (near-original quality, largest). The naming convention tells you the bit-width and quantization method.

Quantization Variants

The "K" in Q4_K_M refers to k-quant methods that use different bit-widths for different layers based on their sensitivity — attention layers might get higher precision than feed-forward layers. The "M" means "medium" (between "S" for small/aggressive and "L" for large/conservative). Q4_K_M typically preserves 95%+ of the original model quality while reducing file size by 4x compared to FP16. For most users, Q4_K_M or Q5_K_M is the right choice.

The Ecosystem

GGUF has become the lingua franca of local AI. Community members quantize new models to GGUF within hours of release and upload them to Hugging Face. Tools like llama.cpp, Ollama, LM Studio, GPT4All, and kobold.cpp all support GGUF natively. This ecosystem is why you can download a 70B model at 4-bit quantization (about 40 GB) and have it answering prompts on a MacBook Pro with 64 GB of RAM within a minute of the download finishing.
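
As one example of that native support, the sketch below uses the official `ollama` Python client. It assumes a local Ollama server is running and the model has already been pulled; the model name is a placeholder.

```python
# A minimal sketch using the `ollama` Python client (pip install ollama).
# Assumes the Ollama server is running locally and the model has been
# pulled, e.g. with `ollama pull llama3`. Ollama stores models as GGUF.
import ollama

response = ollama.chat(
    model="llama3",  # placeholder model name
    messages=[{"role": "user", "content": "What is GGUF?"}],
)
print(response["message"]["content"])
```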
