
llama.cpp

An open-source C/C++ library for running LLM inference on consumer hardware, created by Georgi Gerganov. llama.cpp performs quantized inference without requiring CUDA, PyTorch, or a Python runtime; it runs on CPUs, Apple Silicon, and consumer GPUs. It was the first tool to make running large language models locally accessible to ordinary developers and enthusiasts.

Why It Matters

llama.cpp kicked off the local AI revolution. Before it, running a language model required expensive NVIDIA GPUs and complex Python setups. llama.cpp showed that quantized models could run on a MacBook, or even a Raspberry Pi, with acceptable quality. It spawned an entire ecosystem (Ollama, LM Studio, kobold.cpp) and made self-hosted AI a real option.

Deep Dive

Gerganov released llama.cpp in March 2023, days after Meta released LLaMA. The initial version could run LLaMA-7B on a MacBook using 4-bit quantization — something previously considered impractical. The project grew rapidly, adding support for dozens of architectures (Mistral, Qwen, Phi, Gemma, Command-R), multiple quantization methods (GGML, then GGUF), and hardware acceleration for Metal (Apple), Vulkan (cross-platform GPU), and CUDA (NVIDIA).
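The point of 4-bit quantization is a roughly 4x reduction in the memory needed for weights compared with fp16, which is what made a 7B model fit on a 2023 MacBook. A back-of-the-envelope sketch (the 4.5 bits/weight figure is an illustrative approximation accounting for per-block scale factors; real GGUF files vary by quantization type):

```python
# Rough memory footprint of a model's weights at different precisions.
# Illustrative only: real GGUF files add metadata, and some tensors
# (e.g. embeddings) are often stored at higher precision.

def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Memory for weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

n = 7e9  # LLaMA-7B parameter count
fp16 = weight_memory_gb(n, 16)   # full half-precision weights
q4 = weight_memory_gb(n, 4.5)    # ~4-bit quantized (scales add overhead)

print(f"fp16: {fp16:.1f} GB, 4-bit: {q4:.1f} GB")
# prints "fp16: 14.0 GB, 4-bit: 3.9 GB"
```

At ~4 GB, the quantized weights fit comfortably in the unified memory of a base-model MacBook, which full-precision weights did not.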

Why C++ Matters

The choice of C/C++ was deliberate: no Python runtime, no PyTorch dependency, minimal system requirements. This enables deployment on embedded systems, mobile devices, and servers without GPU infrastructure. The binary is self-contained — download the executable, download a GGUF model file, and you're running. This simplicity is what enabled the local AI ecosystem to grow so quickly.

Server Mode

llama.cpp includes a server mode that exposes an OpenAI-compatible API, making it a drop-in replacement for cloud APIs in development. Many developers use llama.cpp server locally for development and testing, switching to cloud APIs only for production. This keeps development costs near zero and avoids sending sensitive data to external services during development.

Related Concepts
