Abstract
The AdaR framework enhances LLMs' robustness and generalization in mathematical reasoning by synthesizing logically equivalent queries and training with RLVR to penalize spurious logic.
Mathematical reasoning is a primary indicator of the intelligence of large language models (LLMs). However, existing LLMs exhibit failures of robustness and generalization. This paper attributes these deficiencies to spurious reasoning, i.e., producing answers from superficial features. To address this challenge, we propose the AdaR framework to enable adaptive reasoning, wherein models rely on problem-solving logic to produce answers. AdaR synthesizes logically equivalent queries by varying variable values and trains models with RLVR on these data to penalize spurious logic while encouraging adaptive logic. To improve data quality, we extract the problem-solving logic from the original query, generate the corresponding answer by code execution, and then apply a sanity check. Experimental results demonstrate that AdaR improves robustness and generalization, achieving substantial gains in mathematical reasoning while maintaining high data efficiency. Analysis indicates that data synthesis and RLVR function in a coordinated manner to enable adaptive reasoning in LLMs. Further analyses yield key design insights into the effects of critical factors and the applicability of the approach to instruct LLMs. Our project is available at https://github.com/LaiZhejian/AdaR
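To make the synthesis step concrete, here is a minimal sketch of how logically equivalent queries might be produced: variable values are re-sampled and the gold answer is regenerated by executing the extracted problem-solving logic. The template, variable names, and `solve` function below are illustrative assumptions, not code from the AdaR repository.

```python
import random

# Hypothetical example: a grade-school-style query whose problem-solving
# logic has been extracted into an executable function.
TEMPLATE = ("Tom has {a} apples and buys {b} more bags of {c} apples each. "
            "How many apples does he have?")

def solve(a: int, b: int, c: int) -> int:
    # Extracted problem-solving logic: total = initial + bags * per_bag.
    return a + b * c

def synthesize(num_variants: int = 3, seed: int = 0):
    """Create logically equivalent queries by re-sampling variable values,
    producing each gold answer by executing the extracted logic."""
    rng = random.Random(seed)
    variants = []
    for _ in range(num_variants):
        values = {"a": rng.randint(2, 50), "b": rng.randint(1, 10), "c": rng.randint(2, 12)}
        variants.append({
            "query": TEMPLATE.format(**values),
            "answer": solve(**values),
        })
    return variants

for ex in synthesize():
    print(ex["query"], "->", ex["answer"])
```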
Community
🌱 Overview
Large Language Models (LLMs) have shown impressive reasoning capabilities, yet they often rely on spurious reasoning, i.e., producing answers from superficial features, which leads to failures of robustness and generalization.
We propose the AdaR framework to enable adaptive reasoning, wherein models rely on problem-solving logic to produce answers. AdaR synthesizes logically equivalent queries by varying variable values and trains models with RLVR on these data to penalize spurious logic while encouraging adaptive logic.
The framework integrates data synthesis and RLVR training to enhance both robustness (in-domain) and generalization (out-of-domain).
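The RLVR signal can be as simple as a binary verifiable reward against the executable gold answer: an answer pattern that fits the original numbers but not the perturbed variants earns no reward, which is what penalizes spurious logic. The sketch below assumes a naive last-number parser (`extract_final_answer`) and exact-match scoring; the paper's actual verifier may differ.

```python
import re

def extract_final_answer(response: str):
    """Pull the last number out of a model response; a simple verifier
    for math word problems (an assumption, not AdaR's exact parser)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", response.replace(",", ""))
    return float(numbers[-1]) if numbers else None

def verifiable_reward(response: str, gold: float) -> float:
    """Binary RLVR reward: 1 if the final answer matches the executable
    gold answer, else 0."""
    pred = extract_final_answer(response)
    return 1.0 if pred is not None and abs(pred - gold) < 1e-6 else 0.0

print(verifiable_reward("... so Tom has 42 apples.", 42))  # 1.0
print(verifiable_reward("The answer is 40.", 42))          # 0.0
```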
Figure 1.
Subfigure I: Three reasoning modes (direct inference in black, spurious reasoning in red, adaptive reasoning in green).
Subfigure II: Logic-preserving variable perturbation and gold-answer generation via executable logic.
Subfigure III: RLVR optimization encouraging adaptive reasoning through comparative feedback.
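The sanity check applied during synthesis can be sketched as follows: an extracted logic program is accepted only if, when run on the original variable values, it reproduces the original query's known answer. `sanity_check` and its arguments are hypothetical names for illustration; the real filtering step may be stricter.

```python
def sanity_check(solve_fn, original_values: dict, original_answer) -> bool:
    """Accept an extracted logic program only if it reproduces the
    original query's answer on the original variable values."""
    try:
        return solve_fn(**original_values) == original_answer
    except Exception:
        # Programs that crash on the original values are rejected outright.
        return False

# Using the hypothetical solve() logic from the earlier sketch:
assert sanity_check(lambda a, b, c: a + b * c, {"a": 3, "b": 2, "c": 5}, 13)
```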
🌟 Highlights
- 📈 +8.5 average improvement across in-domain robustness tasks and out-of-domain generalization tasks.
- 🧮 Only 9K synthetic examples needed for significant gains.
- ⚙️ Enables algebraic thinking and improves stability under scaling.
- 🌍 Generalizable framework applicable to instruct models.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning (2025)
- THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning (2025)
- ARM2: Adaptive Reasoning Model with Vision Understanding and Executable Code (2025)
- PromptCoT 2.0: Scaling Prompt Synthesis for Large Language Model Reasoning (2025)
- Can Structured Templates Facilitate LLMs in Tackling Harder Tasks? : An Exploration of Scaling Laws by Difficulty (2025)
- ScaleDiff: Scaling Difficult Problems for Advanced Mathematical Reasoning (2025)
- Learning to Reason in Structured In-context Environments with Reinforcement Learning (2025)
