The distinction matters most in production. Latency (particularly TTFT — time to first token) determines user experience for a single request. Throughput determines how many users you can serve with a given number of GPUs. Techniques that improve one often hurt the other: batching many requests together improves throughput (the GPU stays busy) but increases latency (each request waits for the batch).
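This tension can be made concrete with a toy model. The numbers below (step time, arrival rate) are illustrative assumptions, not benchmarks: each forward pass is assumed to take roughly constant time regardless of batch size (decoding is memory-bandwidth-bound at small batches), so a bigger batch yields more tokens per step but makes each request wait longer for its batch to fill.

```python
# Toy model of the batching trade-off (illustrative numbers, not measurements).
# Assumption: one decode step costs step_time seconds regardless of batch size,
# and requests arrive steadily at arrival_rate per second.

def throughput_and_wait(batch_size, step_time=0.05, arrival_rate=10.0):
    """Return (tokens/sec served, average extra wait to fill the batch)."""
    tokens_per_sec = batch_size / step_time           # bigger batch -> more tokens per step
    avg_batch_wait = batch_size / (2 * arrival_rate)  # mean wait for the batch to fill
    return tokens_per_sec, avg_batch_wait

for b in (1, 8, 32):
    tps, wait = throughput_and_wait(b)
    print(f"batch={b:2d}  throughput={tps:6.0f} tok/s  extra TTFT={wait:.2f}s")
```

Both columns grow with batch size: throughput because the fixed step cost is amortized over more requests, TTFT because requests sit in the queue while the batch fills.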
The breakthrough in LLM serving was continuous batching (also called in-flight batching). Instead of waiting for all requests in a batch to finish before starting new ones, continuous batching adds new requests to the batch as slots open up. This keeps GPU utilization high and prevents short requests from being held up by long ones. vLLM, TGI, and TensorRT-LLM all implement this.
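A minimal step-level simulation shows why this helps. The request lengths and batch size here are made up for illustration; the mechanism is the point: static batching holds every slot until the longest request in the batch finishes, while continuous batching refills a slot the moment it frees up.

```python
# Contrast static vs. continuous batching in decode steps (hypothetical workload).

def static_batching(lengths, batch_size):
    """Fixed batches: each batch runs until its longest request finishes."""
    steps = 0
    for i in range(0, len(lengths), batch_size):
        steps += max(lengths[i:i + batch_size])  # short requests wait on the longest
    return steps

def continuous_batching(lengths, batch_size):
    """Admit a waiting request as soon as a slot opens."""
    queue = list(lengths)
    slots = []   # remaining tokens for each in-flight request
    steps = 0
    while queue or slots:
        while queue and len(slots) < batch_size:
            slots.append(queue.pop(0))          # refill freed slots immediately
        steps += 1                              # one decode step for the whole batch
        slots = [r - 1 for r in slots if r > 1]  # finished requests leave the batch
    return steps

# A mix of one long and several short requests per batch of 4:
lengths = [100, 10, 10, 10, 100, 10, 10, 10]
print("static:    ", static_batching(lengths, batch_size=4), "steps")
print("continuous:", continuous_batching(lengths, batch_size=4), "steps")
```

With static batching the three short requests in each batch idle their slots for 90 steps while the long one finishes; continuous batching reuses those slots, completing the same work in far fewer total steps.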
At scale, throughput directly determines cost per token. A server sustaining 10,000 tokens/second at $10/hour serves 36 million tokens per hour, about $0.00028 per 1,000 tokens. The same server at 1,000 tokens/second costs ten times as much per token, about $0.0028 per 1,000. This 10x difference is why inference optimization (quantization, speculative decoding, better batching) matters so much: it's not just faster, it's cheaper. Providers who optimize throughput can offer lower prices or higher margins.
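The arithmetic is a one-liner worth writing down, since it is how serving cost is usually back-of-enveloped (the $10/hour and throughput figures are the example values from above, not real prices):

```python
# Convert an hourly GPU price and sustained throughput into cost per 1,000 tokens.

def cost_per_1k_tokens(dollars_per_hour, tokens_per_second):
    tokens_per_hour = tokens_per_second * 3600
    return dollars_per_hour / tokens_per_hour * 1000

print(f"${cost_per_1k_tokens(10, 10_000):.5f} per 1K tokens")  # ~$0.00028
print(f"${cost_per_1k_tokens(10, 1_000):.5f} per 1K tokens")   # ~$0.00278
```

Because the hourly price is fixed, cost per token is inversely proportional to throughput: double the tokens/second and the per-token cost halves.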