DITING: A Multi-Agent Evaluation Framework for Benchmarking Web Novel Translation
Abstract
DITING, a comprehensive benchmark for web novel translation, and AgentEval, a reasoning-driven multi-agent evaluation framework, are introduced to assess translation quality, revealing that Chinese-trained LLMs outperform larger foreign models.
Large language models (LLMs) have substantially advanced machine translation (MT), yet their effectiveness in translating web novels remains unclear. Existing benchmarks rely on surface-level metrics that fail to capture the distinctive traits of this genre. To address these gaps, we introduce DITING, the first comprehensive evaluation framework for web novel translation, assessing narrative and cultural fidelity across six dimensions: idiom translation, lexical ambiguity, terminology localization, tense consistency, zero-pronoun resolution, and cultural safety, supported by over 18K expert-annotated Chinese-English sentence pairs. We further propose AgentEval, a reasoning-driven multi-agent evaluation framework that simulates expert deliberation to assess translation quality beyond lexical overlap, achieving the highest correlation with human judgments among seven tested automatic metrics. To enable metric comparison, we develop MetricAlign, a meta-evaluation dataset of 300 sentence pairs annotated with error labels and scalar quality scores. Comprehensive evaluation of fourteen open, closed, and commercial models reveals that Chinese-trained LLMs surpass larger foreign counterparts, and that DeepSeek-V3 delivers the most faithful and stylistically coherent translations. Our work establishes a new paradigm for exploring LLM-based web novel translation and provides public resources to advance future research.
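The meta-evaluation step described above, ranking seven automatic metrics by their agreement with human judgments on MetricAlign, amounts to a rank-correlation computation between metric scores and human scalar quality scores. The snippet below is a minimal sketch of that comparison, assuming a JSONL file and metric column names that are purely illustrative; it is not the paper's released code.

```python
# Minimal sketch: comparing automatic metrics against human judgments on a
# MetricAlign-style meta-evaluation set. The file name, field names, and the
# metric list are assumptions for illustration, not the paper's actual release.
import json
from scipy.stats import spearmanr, kendalltau

def load_pairs(path):
    """Load sentence pairs carrying human scalar scores and per-metric scores."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

def metric_human_correlation(pairs, metric_key, human_key="human_score"):
    """Return (Spearman rho, Kendall tau) between one metric and human scores."""
    metric_scores = [p[metric_key] for p in pairs]
    human_scores = [p[human_key] for p in pairs]
    rho, _ = spearmanr(metric_scores, human_scores)
    tau, _ = kendalltau(metric_scores, human_scores)
    return rho, tau

if __name__ == "__main__":
    pairs = load_pairs("metricalign.jsonl")          # hypothetical file name
    for metric in ["bleu", "comet", "agenteval"]:    # hypothetical field names
        rho, tau = metric_human_correlation(pairs, metric)
        print(f"{metric}: Spearman={rho:.3f}, Kendall={tau:.3f}")
```

Under this setup, the metric whose scores correlate most strongly with the human annotations would be preferred, which is the sense in which the abstract reports AgentEval achieving the highest correlation among the seven metrics tested.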
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API.
- Revisiting Metric Reliability for Fine-grained Evaluation of Machine Translation and Summarization in Indian Languages (2025)
- Ready to Translate, Not to Represent? Bias and Performance Gaps in Multilingual LLMs Across Language Families and Domains (2025)
- Long-context Reference-based MT Quality Estimation (2025)
- Evaluating Language Translation Models by Playing Telephone (2025)
- XLQA: A Benchmark for Locale-Aware Multilingual Open-Domain Question Answering (2025)
- Extending Automatic Machine Translation Evaluation to Book-Length Documents (2025)
- TRUEBench: Can LLM Response Meet Real-world Constraints as Productivity Assistant? (2025)