Infrastructure

Throughput

Tokens Per Second, TPS
The total number of tokens a system can generate per second across all concurrent requests. Distinct from latency (how quickly a single request is served). A high-throughput system serves many users simultaneously; a low-latency system serves each individual user quickly. The two are often traded off against each other.
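The distinction can be made concrete with a small sketch. The numbers below are hypothetical, chosen only to illustrate that per-user generation speed and aggregate system throughput are different quantities:

```python
def system_throughput(tokens_per_request: int, num_concurrent: int, wall_seconds: float) -> float:
    """Aggregate tokens generated per second across all concurrent requests."""
    return tokens_per_request * num_concurrent / wall_seconds

# 64 concurrent requests, each generating 500 tokens over 10 s of wall time:
per_user_speed = 500 / 10                    # each user sees only 50 tokens/s
total = system_throughput(500, 64, 10.0)     # but the system delivers 3,200 tokens/s
```

A single-user benchmark would report the 50 tokens/s figure; a capacity-planning exercise cares about the 3,200.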

Why it matters

When building AI products, throughput determines your serving costs and capacity. A system that generates 100 tokens/second per user but can only serve one user at a time has low throughput even if individual latency is great. Throughput is what you optimize when you're paying GPU bills to serve thousands of concurrent users.

Deep Dive

The distinction matters most in production. Latency (particularly TTFT — time to first token) determines user experience for a single request. Throughput determines how many users you can serve with a given number of GPUs. Techniques that improve one often hurt the other: batching many requests together improves throughput (the GPU stays busy) but increases latency (each request waits for the batch).

Continuous Batching

The breakthrough in LLM serving was continuous batching (also called in-flight batching). Instead of waiting for all requests in a batch to finish before starting new ones, continuous batching adds new requests to the batch as slots open up. This keeps GPU utilization high and prevents short requests from being held up by long ones. vLLM, TGI, and TensorRT-LLM all implement this.
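The scheduling idea can be shown with a toy simulation. This is a sketch of the policy, not how vLLM, TGI, or TensorRT-LLM are actually implemented; request sizes and the batch limit are made up for illustration:

```python
from collections import deque

def continuous_batching(requests: dict, max_batch_size: int) -> dict:
    """Toy simulation of continuous (in-flight) batching.

    `requests` maps request id -> number of tokens to generate. Each decode
    step produces one token for every request currently in the batch; when a
    request finishes, its slot is refilled from the queue immediately instead
    of waiting for the whole batch to drain.
    """
    queue = deque(requests.items())
    active = {}        # request id -> tokens still to generate
    finished_at = {}   # request id -> decode step at which it completed
    step = 0
    while queue or active:
        # Fill any open slots from the queue (the "in-flight" part).
        while queue and len(active) < max_batch_size:
            rid, n = queue.popleft()
            active[rid] = n
        step += 1
        for rid in list(active):
            active[rid] -= 1
            if active[rid] == 0:
                del active[rid]
                finished_at[rid] = step
    return finished_at

# A short request queued behind a long one is not held hostage by it:
done = continuous_batching({"long": 100, "short1": 5, "short2": 5}, max_batch_size=2)
# "short2" completes at step 10; with static batching it could not even start
# until the batch containing "long" fully drained at step 100.
```

The GPU stays busy because a slot never sits idle while work is queued, which is exactly the throughput win described above.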

The Economics

At scale, throughput directly determines cost per token. A server generating 10,000 tokens/second at $10/hour produces 36 million tokens per hour, or about $0.00028 per 1,000 tokens. The same server at 1,000 tokens/second costs about $0.0028 per 1,000 tokens. This 10x difference is why inference optimization (quantization, speculative decoding, better batching) matters so much: it's not just faster, it's cheaper. Providers who optimize throughput can offer lower prices or higher margins.

Related concepts

← All terms
← Text-to-Speech Together AI →