
llama.cpp

An open-source C/C++ library for running LLM inference on consumer hardware, created by Georgi Gerganov. llama.cpp performs quantized inference without requiring CUDA, PyTorch, or Python: it runs on CPUs, Apple Silicon, and consumer GPUs. It was the first tool that made running large language models locally accessible to everyday developers and enthusiasts.

Why It Matters

llama.cpp kicked off the local AI revolution. Before it, running a language model required expensive NVIDIA GPUs and complex Python setups. llama.cpp showed that quantized models could run on a MacBook, or even a Raspberry Pi, with acceptable quality. It spawned an entire ecosystem (Ollama, LM Studio, kobold.cpp) and made "self-hosted AI" a real option.

Deep Dive

Gerganov released llama.cpp in March 2023, shortly after Meta released LLaMA. The initial version could run LLaMA-7B on a MacBook using 4-bit quantization, something previously considered impractical. The project grew rapidly, adding support for dozens of architectures (Mistral, Qwen, Phi, Gemma, Command-R), multiple quantization schemes (stored first in the GGML file format, later in its successor, GGUF), and hardware acceleration for Metal (Apple), Vulkan (cross-platform GPU), and CUDA (NVIDIA).
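
To see why 4-bit quantization was the unlock, a back-of-the-envelope calculation helps. The sketch below is illustrative only: it counts weight storage alone and ignores the KV cache, activations, and the per-block scale metadata that llama.cpp's quantized formats carry.

```cpp
#include <cstdio>

// Rough weight-memory estimate for a 7B-parameter model.
// Illustrative figures only: real usage adds KV cache, activations,
// and per-block quantization metadata on top of these numbers.
int main() {
    const double params  = 7e9;
    const double fp16_gb = params * 2.0 / 1e9; // 2 bytes per weight -> ~14 GB
    const double q4_gb   = params * 0.5 / 1e9; // 4 bits per weight  -> ~3.5 GB
    std::printf("fp16: ~%.1f GB   4-bit: ~%.1f GB\n", fp16_gb, q4_gb);
    return 0;
}
```

At roughly 3.5 GB of weights instead of 14 GB, a 7B model fits comfortably in the RAM of an ordinary laptop, which is what made the original MacBook demo possible.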

Why C++ Matters

The choice of C/C++ was deliberate: no Python runtime, no PyTorch dependency, minimal system requirements. This enables deployment on embedded systems, mobile devices, and servers without GPU infrastructure. The binary is self-contained — download the executable, download a GGUF model file, and you're running. This simplicity is what enabled the local AI ecosystem to grow so quickly.
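
For developers who link against the library rather than use the prebuilt binaries, loading a model is a handful of C calls. The sketch below follows the C API roughly as it looked in 2024 (llama_backend_init, llama_load_model_from_file, llama_new_context_with_model); the project moves fast and some of these names have since been renamed or deprecated, so treat this as the shape of the workflow, not a reference.

```cpp
#include "llama.h"   // ships with llama.cpp; link against libllama
#include <cstdio>

int main(int argc, char ** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    llama_backend_init();  // one-time global initialization

    // Load a GGUF file straight from disk: no Python, no framework.
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file(argv[1], mparams);
    if (!model) {
        std::fprintf(stderr, "failed to load %s\n", argv[1]);
        return 1;
    }

    // A context holds the KV cache and per-session inference state.
    llama_context_params cparams = llama_context_default_params();
    llama_context * ctx = llama_new_context_with_model(model, cparams);

    // ... tokenize the prompt, call llama_decode(), sample tokens ...

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```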

Server Mode

llama.cpp includes a server mode that exposes an OpenAI-compatible API, making it a drop-in replacement for cloud APIs in development. Many developers use llama.cpp server locally for development and testing, switching to cloud APIs only for production. This keeps development costs near zero and avoids sending sensitive data to external services during development.
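
As a sketch of what "drop-in replacement" means in practice: once the server is running (the llama-server binary listens on port 8080 by default), any OpenAI-style client can talk to it over the /v1/chat/completions route. The example below assumes that default address and uses libcurl for the request.

```cpp
#include <curl/curl.h>
#include <cstdio>

// POST a chat request to a locally running llama.cpp server.
// Assumes the server's default bind address (http://localhost:8080);
// the response JSON goes to stdout via libcurl's default write behavior.
int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL * curl = curl_easy_init();
    if (!curl) return 1;

    const char * body =
        "{\"messages\":[{\"role\":\"user\",\"content\":\"Hello!\"}]}";

    struct curl_slist * headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://localhost:8080/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        std::fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

Because the request and response shapes match the OpenAI API, pointing an existing OpenAI SDK at http://localhost:8080/v1 should work the same way.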
