
Throughput

Tokens Per Second (TPS)
The total number of tokens a system can generate per second across all concurrent requests. Distinct from latency (how fast a single request is served). A system with high throughput serves many users simultaneously. A system with low latency serves each individual user quickly. The two often trade off against each other.

Why it matters

When building AI products, throughput determines your serving costs and capacity. A system that generates 100 tokens/second per user but can only serve one user at a time has low throughput even though individual latency is great. Throughput is what you optimize when you're paying GPU bills for thousands of concurrent users.

Deep Dive

The distinction matters most in production. Latency (particularly TTFT — time to first token) determines user experience for a single request. Throughput determines how many users you can serve with a given number of GPUs. Techniques that improve one often hurt the other: batching many requests together improves throughput (the GPU stays busy) but increases latency (each request waits for the batch).
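A toy model makes the trade-off concrete. The numbers below (a fixed per-step overhead plus a small per-request cost) are made up for illustration, not benchmarks of any real system:

```python
def step_time_ms(batch_size, overhead_ms=20.0, per_request_ms=0.5):
    """Hypothetical time for one decode step over a batch of requests."""
    return overhead_ms + per_request_ms * batch_size

def throughput_tps(batch_size):
    """Aggregate tokens/second: each request emits one token per step."""
    return batch_size * 1000.0 / step_time_ms(batch_size)

def per_request_latency_ms(batch_size):
    """Time each individual request waits per generated token."""
    return step_time_ms(batch_size)

for b in (1, 8, 64):
    print(b, round(throughput_tps(b)), per_request_latency_ms(b))
```

With these assumed costs, growing the batch from 1 to 64 lifts aggregate throughput from roughly 49 to over 1,200 tokens/second, while each request's per-token latency rises from 20.5 ms to 52 ms: the GPU stays busy, but every individual user waits longer.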

Continuous Batching

The breakthrough in LLM serving was continuous batching (also called in-flight batching). Instead of waiting for all requests in a batch to finish before starting new ones, continuous batching adds new requests to the batch as slots open up. This keeps GPU utilization high and prevents short requests from being held up by long ones. vLLM, TGI, and TensorRT-LLM all implement this.
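The scheduling idea can be sketched in a few lines. This is a simplified illustration with made-up request lengths; real engines like vLLM and TGI also manage per-slot KV-cache memory, which this sketch ignores:

```python
from collections import deque

def continuous_batching(requests, max_batch=4):
    """requests: list of (request_id, tokens_to_generate). Returns finish order.

    New requests are admitted into the running batch as soon as a slot
    frees up, rather than waiting for the whole batch to drain.
    """
    waiting = deque(requests)
    running = {}          # request_id -> tokens still to generate
    finished = []
    while waiting or running:
        # Admit waiting requests into any free slots (the "continuous" part).
        while waiting and len(running) < max_batch:
            rid, n = waiting.popleft()
            running[rid] = n
        # One decode step: every running request generates one token.
        for rid in list(running):
            running[rid] -= 1
            if running[rid] == 0:
                del running[rid]
                finished.append(rid)
    return finished

print(continuous_batching([("a", 2), ("b", 5), ("c", 1), ("d", 3), ("e", 1)]))
# → ['c', 'a', 'e', 'd', 'b']
```

Note that the short requests ("c", "e") finish early and hand their slots to waiting work instead of idling until the long request ("b") completes, which is exactly why this keeps utilization high.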

The Economics

At scale, throughput directly determines cost per token. A server generating 10,000 tokens/second at $10/hour produces 36 million tokens per hour, or about $0.0003 per 1,000 tokens. The same server at 1,000 tokens/second costs about $0.003 per 1,000 tokens. This 10x difference is why inference optimization (quantization, speculative decoding, better batching) matters so much: it's not just faster, it's cheaper. Providers who optimize throughput can offer lower prices or higher margins.
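The cost arithmetic can be checked directly:

```python
def cost_per_1k_tokens(tokens_per_second, dollars_per_hour):
    """Serving cost in dollars per 1,000 generated tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return dollars_per_hour / (tokens_per_hour / 1000)

print(cost_per_1k_tokens(10_000, 10))  # ≈ $0.00028 per 1K tokens
print(cost_per_1k_tokens(1_000, 10))   # ≈ $0.0028 per 1K tokens
```

The ratio is exactly the throughput ratio: serving cost per token scales inversely with tokens per second for a fixed hourly GPU price.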
