TMLR-Group-HF/Entropy-Qwen2.5-7B

This model is a Qwen2.5-7B checkpoint trained with the Entropy Minimization method on the MATH training set, released as part of the Co-rewarding project.
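
The checkpoint can be loaded like any other Qwen2.5-7B model with Hugging Face Transformers. The snippet below is a minimal usage sketch, not official example code: verify the exact repository id on the Hub (the MATH-trained checkpoint may also appear with a "-MATH" suffix), and the generation settings are illustrative rather than those used in the paper.

```python
# Minimal usage sketch with Hugging Face Transformers (not official example code).
# Verify the exact repository id on the Hub; the MATH-trained checkpoint may be
# published as "TMLR-Group-HF/Entropy-Qwen2.5-7B-MATH".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Entropy-Qwen2.5-7B"  # assumed from this card's title
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Find the sum of the first 100 positive integers."
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```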

Co-rewarding is a self-supervised reinforcement learning (RL) framework designed to enhance the reasoning capabilities of large language models (LLMs). It addresses training instability by drawing complementary supervision from multiple views, mitigating the "self-consistent illusion" and reward hacking that often affect single-view self-rewarding approaches. The framework has two instantiations: Co-rewarding-I (data side, using contrastive agreement) and Co-rewarding-II (model side, using self-distillation with a reference teacher); both introduce the discrepancy needed to keep training from collapsing onto trivial reasoning solutions.

For in-depth information on the Co-rewarding framework, its methodology, and experimental results, please refer to the paper: Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models (https://arxiv.org/abs/2508.00410).

The code, installation instructions, training procedures, and other related checkpoints and datasets are available on the project's GitHub repository: https://github.com/tmlr-group/Co-rewarding
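
For reference, the entropy-minimization objective named above amounts, roughly, to reducing the average token-level entropy of the model's output distribution over its own generations. The sketch below only illustrates that quantity under this assumption; it is not the repository's training code, and `mean_token_entropy` is a placeholder name.

```python
# Illustrative sketch (assumption): the token-level entropy that an
# entropy-minimization objective drives down on the model's own generations.
# The actual training code lives at https://github.com/tmlr-group/Co-rewarding.
import torch
import torch.nn.functional as F

def mean_token_entropy(logits: torch.Tensor, response_mask: torch.Tensor) -> torch.Tensor:
    """Average per-token entropy over generated (masked-in) positions.

    logits:        [batch, seq_len, vocab_size] from the policy model
    response_mask: [batch, seq_len], 1.0 on generated tokens, 0.0 elsewhere
    """
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # [batch, seq_len]
    return (entropy * response_mask).sum() / response_mask.sum().clamp(min=1.0)

# Minimizing this quantity (or using its negative as a reward) sharpens the
# model's token distributions on sampled responses without ground-truth labels.
```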

Citation

@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}