
llama.cpp

An open-source C/C++ library for running LLM inference on consumer hardware, created by Georgi Gerganov. llama.cpp performs quantized inference without requiring CUDA, PyTorch, or Python: it runs on CPUs, Apple Silicon, and consumer GPUs. More than any other single tool, it made running large language models locally accessible to everyday developers and enthusiasts.

Why it matters

llama.cpp started the local AI revolution. Before it, running a language model meant expensive NVIDIA GPUs and a complex Python setup. llama.cpp showed that quantized models could run with acceptable quality on a MacBook, or even on a Raspberry Pi. It spawned an entire ecosystem (Ollama, LM Studio, KoboldCpp) and made "self-hosted AI" a real option.

Deep Dive

Gerganov released llama.cpp in March 2023, days after Meta released LLaMA. The initial version could run LLaMA-7B on a MacBook using 4-bit quantization, something previously considered impractical. The project grew rapidly, adding support for dozens of model architectures (Mistral, Qwen, Phi, Gemma, Command-R), a range of quantization schemes stored in its own file format (GGML, later superseded by GGUF), and hardware acceleration via Metal (Apple), Vulkan (cross-platform GPU), and CUDA (NVIDIA).
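The appeal of 4-bit quantization is easy to see from a back-of-the-envelope memory estimate. The ~4.5 bits-per-weight figure below is an approximation for block-quantized 4-bit schemes, which store extra per-block scale metadata (the exact overhead varies by quant type):

```python
# Rough weight-memory estimate for a model at a given precision.
def approx_model_gb(n_params, bits_per_weight):
    """Approximate weight storage in gigabytes (metric GB)."""
    return n_params * bits_per_weight / 8 / 1e9

# A 7B model in fp16 vs. a ~4.5 bits/weight quantization:
fp16 = approx_model_gb(7e9, 16)   # too large for most laptops
q4   = approx_model_gb(7e9, 4.5)  # fits in a MacBook's unified memory
print(f"7B fp16: {fp16:.1f} GB, 7B 4-bit: {q4:.1f} GB")
# → 7B fp16: 14.0 GB, 7B 4-bit: 3.9 GB
```

Roughly a 3.5x reduction, which is the difference between "needs a workstation GPU" and "runs on a laptop".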

Why C++ Matters

The choice of C/C++ was deliberate: no Python runtime, no PyTorch dependency, minimal system requirements. This enables deployment on embedded systems, mobile devices, and servers without GPU infrastructure. The binary is self-contained — download the executable, download a GGUF model file, and you're running. This simplicity is what enabled the local AI ecosystem to grow so quickly.
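That workflow can be sketched in a few commands (the paths and model filename are illustrative placeholders; flag names follow the `llama-cli` tool, but check `--help` for your build):

```shell
# Build the self-contained binaries (CMake is the project's build system).
cmake -B build
cmake --build build --config Release

# Run a prompt against a quantized GGUF model file.
# model.gguf stands in for whatever model you downloaded.
./build/bin/llama-cli -m ./models/model.gguf \
    -p "Explain GGUF in one sentence." -n 64
```

No package manager, no virtual environment, no driver stack: the binary plus one model file is the whole deployment.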

Server Mode

llama.cpp includes a server mode that exposes an OpenAI-compatible API, making it a drop-in replacement for cloud APIs in development. Many developers use llama.cpp server locally for development and testing, switching to cloud APIs only for production. This keeps development costs near zero and avoids sending sensitive data to external services during development.
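As a sketch of that pattern (assuming a `llama-server` instance is already listening locally; the port, model name, and prompt are illustrative), a client can talk to the OpenAI-compatible endpoint with nothing but the standard library:

```python
import json
import urllib.request

def build_chat_request(prompt, host="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for a local llama.cpp server."""
    payload = {
        "model": "local-model",  # llama.cpp serves whatever model it loaded
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    req = urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

req, payload = build_chat_request("Say hello in one sentence.")
# To actually send it (requires a running server):
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the request shape matches the OpenAI API, pointing the same client at a cloud endpoint for production is a one-line change to `host`.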
