The system works like competitive matchmaking: users submit prompts, two anonymous models respond, and the user picks a winner (or declares a tie). Over hundreds of thousands of votes, Elo ratings stabilize to reflect genuine quality differences. The anonymity is crucial: users judge the response, not the brand. Models are periodically added and removed as new versions launch.
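To make the mechanics concrete, here is a minimal sketch of the classic Elo update applied to one vote. This is the textbook formula, not the Arena's exact implementation; the starting rating of 1000 and the K-factor of 32 are illustrative assumptions.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Win probability for model A under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))


def elo_update(r_a: float, r_b: float, outcome: float, k: float = 32.0) -> tuple[float, float]:
    """Apply one vote. outcome: 1.0 (A wins), 0.0 (B wins), 0.5 (tie).
    K controls how far a single vote can move a rating."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (outcome - e_a), r_b + k * ((1.0 - outcome) - (1.0 - e_a))


# Two new models at an assumed initial rating; one hypothetical vote for A.
rating_a, rating_b = 1000.0, 1000.0
rating_a, rating_b = elo_update(rating_a, rating_b, outcome=1.0)
print(rating_a, rating_b)  # 1016.0 984.0
```

Because each update is small and symmetric, no single vote matters much; ratings drift toward a stable ordering only as thousands of votes accumulate, which is why anonymity during voting keeps the signal clean.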
Elo ratings provide a single number per model, which is useful for quick comparison but obscures important details. A model might be better at coding but worse at creative writing; Elo averages these out. The Arena introduced category-specific ratings (coding, math, creative writing, instruction following) to address this. Elo also requires many votes to stabilize: a new model needs thousands of comparisons before its rating is reliable.
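One way to picture category-specific ratings is as separate Elo tables, one per prompt category, with each vote updating only the table for its category. This sketch reuses `elo_update` from above; the `record_vote` helper, the table layout, and the 1000.0 default are hypothetical, and it assumes each prompt has already been classified into a category.

```python
from collections import defaultdict

# Hypothetical per-category Elo tables: ratings[category][model] -> rating.
# Unseen (category, model) pairs start at an assumed 1000.0.
ratings = defaultdict(lambda: defaultdict(lambda: 1000.0))


def record_vote(category: str, model_a: str, model_b: str, outcome: float) -> None:
    """Route one vote into the Elo table for its prompt's category only."""
    table = ratings[category]
    table[model_a], table[model_b] = elo_update(table[model_a], table[model_b], outcome)


# A coding battle moves "coding" ratings without touching "creative writing".
record_vote("coding", "model_x", "model_y", outcome=1.0)
```

The trade-off is visible in the structure: splitting votes across categories means each table sees only a fraction of the total volume, so per-category ratings take even longer to stabilize than the overall one.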
Arena voting has known biases: users tend to prefer longer responses, responses with formatting (bullet points, headers), and responses that sound more confident, even when they're wrong. Some labs have been suspected of optimizing for Arena-style preferences rather than genuine quality. The LMSYS team works to mitigate these biases through statistical methods, such as style-controlled ratings that adjust for response length and formatting, and by increasing vote volume, but the biases are inherent to any preference-based evaluation.
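As a rough illustration of how length bias can be detected in vote data, the sketch below measures how often the longer response wins among decisive votes. This is a crude diagnostic of my own construction, not LMSYS's actual methodology, and the vote tuple format is assumed.

```python
def longer_wins_rate(votes: list[tuple[int, int, float]]) -> float | None:
    """Crude length-bias diagnostic: among decisive votes where the two
    responses differ in length, how often did the longer one win?

    votes: (len_a, len_b, outcome) with outcome 1.0 (A), 0.0 (B), 0.5 (tie).
    A rate well above 0.5 suggests the data rewards verbosity, though it
    could also mean longer answers are genuinely better on those prompts.
    """
    decisive = [(a, b, o) for a, b, o in votes if o != 0.5 and a != b]
    if not decisive:
        return None
    wins = sum(1 for a, b, o in decisive if (a > b) == (o == 1.0))
    return wins / len(decisive)
```

The caveat in the docstring is the hard part: a raw win rate can't separate a stylistic preference from real quality differences, which is why serious corrections model length and formatting alongside model identity rather than reading one number off the top.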