Drawing Conclusions from Draws: Rethinking Preference Semantics in Arena-Style LLM Evaluation
Abstract
Ignoring rating updates for draws in arena-style evaluations of large language models yields a 1-3% relative improvement in battle outcome prediction accuracy across different rating systems.
In arena-style evaluation of large language models (LLMs), two LLMs respond to a user query, and the user chooses the winning response or deems the "battle" a draw, resulting in an adjustment to the ratings of both models. The prevailing approach for modeling these rating dynamics is to view battles as two-player game matches, as in chess, and apply the Elo rating system and its derivatives. In this paper, we critically examine this paradigm. Specifically, we question whether a draw genuinely means that the two models are equal and hence whether their ratings should be equalized. Instead, we conjecture that draws are more indicative of query difficulty: if the query is too easy, then both models are more likely to succeed equally. On three real-world arena datasets, we show that ignoring rating updates for draws yields a 1-3% relative increase in battle outcome prediction accuracy (which includes draws) for all four rating systems studied. Further analyses suggest that draws occur more often for queries rated as very easy and for those rated as highly objective, with risk ratios of 1.37 and 1.35, respectively. We recommend that future rating systems reconsider existing draw semantics and account for query properties in rating updates.
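To make the draw semantics concrete, below is a minimal sketch of a standard Elo update alongside the draw-skipping variant the abstract describes. The K-factor of 32, the function names, and the skip_draws flag are illustrative assumptions, not details taken from the paper.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo logistic model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, outcome: str,
               k: float = 32.0, skip_draws: bool = True) -> tuple[float, float]:
    """Update both ratings after one battle.

    outcome: "a" (A wins), "b" (B wins), or "draw".
    With skip_draws=True, a draw leaves both ratings unchanged instead of
    pulling them toward each other, per the variant studied in the paper.
    K=32 is an assumed, chess-style default.
    """
    if outcome == "draw":
        if skip_draws:
            return r_a, r_b  # no update: draw treated as uninformative
        s_a = 0.5  # classic chess convention: a draw counts as half a win
    else:
        s_a = 1.0 if outcome == "a" else 0.0
    e_a = expected_score(r_a, r_b)
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Example: a draw between a 1200-rated and a 1000-rated model.
print(elo_update(1200.0, 1000.0, "draw", skip_draws=False))  # ~(1191.7, 1008.3)
print(elo_update(1200.0, 1000.0, "draw", skip_draws=True))   # (1200.0, 1000.0)
```

Under the classic convention, a draw drags the higher-rated model down and lifts the lower-rated one; the skip variant instead treats the draw as carrying no information about relative strength, which is consistent with the paper's conjecture that draws mostly reflect query difficulty.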
Community
Automated message from Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- Inclusion Arena: An Open Platform for Evaluating Large Foundation Models with Real-World Apps (2025)
- LLMsPark: A Benchmark for Evaluating Large Language Models in Strategic Gaming Contexts (2025)
- Dropping Just a Handful of Preferences Can Change Top Large Language Model Rankings (2025)
- REALM: Recursive Relevance Modeling for LLM-based Document Re-Ranking (2025)
- User-centric Subjective Leaderboard by Customizable Reward Modeling (2025)
- Model Consistency as a Cheap yet Predictive Proxy for LLM Elo Scores (2025)
- Can Large Models Fool the Eye? A New Turing Test for Biological Animation (2025)