---
license: mit
pipeline_tag: text-generation
library_name: transformers
---

# Co-rewarding: Qwen2.5-7B Model

This is the Qwen2.5-7B model trained with the Co-rewarding method on the MATH training set, as presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://arxiv.org/abs/2508.00410).

*Figure: Overview of the Co-rewarding framework.*

## Model Description

While reinforcement learning with verifiable rewards (RLVR) effectively improves the reasoning ability of large language models (LLMs), its reliance on human-annotated labels creates a scaling dilemma, especially for complex tasks. Recent self-rewarding methods investigate label-free alternatives for unlocking the reasoning capabilities of LLMs, but they frequently suffer from training collapse: the single-view supervision signal easily forms a self-consistent illusion, leading to reward hacking.

Co-rewarding is a novel self-supervised RL framework that improves training stability by seeking complementary supervision from other views. Specifically, Co-rewarding is instantiated in two ways:

  1. **Co-rewarding-I**: a data-side instantiation that derives reward signals from contrastive agreement across semantically analogous questions.
  2. **Co-rewarding-II**: a model-side instantiation that maintains a slowly updated reference teacher whose pseudo labels enable self-distillation (see the sketch after this list).
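Conceptually, the two reward signals can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it assumes binary agreement rewards, a majority-vote pseudo label, and an exponential-moving-average (EMA) teacher; the function names and the momentum value are hypothetical.

```python
import torch
from collections import Counter

def co_rewarding_i_rewards(original_answers, rephrased_answers):
    """Co-rewarding-I (data side, sketch): reward rollouts on the original
    question whose final answers agree with the majority-vote answer
    extracted from rollouts on a semantically rephrased question."""
    pseudo_label = Counter(rephrased_answers).most_common(1)[0][0]
    return [1.0 if ans == pseudo_label else 0.0 for ans in original_answers]

@torch.no_grad()
def ema_teacher_update(teacher, student, momentum=0.99):
    """Co-rewarding-II (model side, sketch): slowly track the student's
    weights so the reference teacher provides stable pseudo labels for
    self-distillation."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
```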

Intuitively, these instantiations introduce different levels of discrepancy, making it harder for training to collapse onto trivial reasoning solutions. Empirically, Co-rewarding trains stably across various setups and outperforms other self-rewarding baselines by +3.31% on average across multiple mathematical reasoning benchmarks, with a +7.49% gain on Llama-3.2-3B-Instruct. Notably, Co-rewarding matches or even surpasses RLVR with ground-truth (GT) labels in several cases, e.g., a Pass@1 of 94.01% on GSM8K with Qwen3-8B-Base.

For more details about the Co-rewarding method, including code and training scripts, please refer to the official GitHub repository.
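
## Usage

A minimal text-generation example with the Transformers library. The repository ID below is a placeholder, not confirmed by this card; replace it with this model's actual Hub ID.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ORG/Co-rewarding-Qwen2.5-7B"  # hypothetical ID -- replace with the actual Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Solve: If 3x + 5 = 20, what is x? Show your reasoning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```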

## Citation

If you use our datasets or models, please cite our paper:

```bibtex
@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
```