Improve model card: Add pipeline tag, library name, paper link, abstract, and update GitHub/citation (#1)
Co-authored-by: Niels Rogge <[email protected]>

README.md CHANGED
@@ -1,19 +1,25 @@

The previous card contained only the `license: mit` front matter, a bare `## TMLR-Group-HF/Entropy-Qwen3-4B-Base` heading, and a citation block; the updated card follows in full.

---
license: mit
pipeline_tag: text-generation
library_name: transformers
---

# Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models

This is the **TMLR-Group-HF/Entropy-Qwen3-4B-Base** model: the Qwen3-4B-Base model trained with the Entropy Minimization method on the MATH training set, as presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).

For more details on the Co-rewarding framework, code, and other checkpoints, please refer to the official GitHub repository: [https://github.com/tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding).
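
## Usage

A minimal usage sketch with the standard Hugging Face Transformers text-generation API. The prompt and generation settings below are illustrative assumptions, not the paper's evaluation configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Entropy-Qwen3-4B-Base"

# Load the checkpoint with the standard causal-LM classes.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative plain-text math prompt (this is a base-model checkpoint, so no chat template is applied here).
prompt = "Question: A train travels 60 miles per hour for 2.5 hours. How far does it travel?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Example generation settings; adjust as needed.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```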

## Abstract

While reinforcement learning with verifiable rewards (RLVR) is effective in improving the reasoning ability of large language models (LLMs), its reliance on human-annotated labels leads to a scaling-up dilemma, especially for complex tasks. Recent self-rewarding methods investigate a label-free alternative to unlock the reasoning capabilities of LLMs, yet they frequently encounter a non-negligible training-collapse issue, as the single-view supervision signal easily forms a self-consistent illusion, yielding reward hacking. Inspired by the success of self-supervised learning, we propose *Co-rewarding*, a novel self-supervised RL framework that improves training stability by seeking complementary supervision from other views. Specifically, we instantiate Co-rewarding in two ways: (1) *Co-rewarding-I* is a data-side instantiation that derives reward signals from contrastive agreement across semantically analogous questions; and (2) *Co-rewarding-II* is a model-side instantiation that maintains a slowly-updated reference teacher with pseudo labels to realize self-distillation. Intuitively, such instantiations introduce different levels of discrepancy to increase the difficulty of training collapse on trivial reasoning solutions. Empirically, Co-rewarding exhibits stable training across various setups, and outperforms other self-rewarding baselines by +3.31% on average on multiple mathematical reasoning benchmarks, especially by +7.49% on Llama-3.2-3B-Instruct. Notably, Co-rewarding reaches or even surpasses RLVR with ground-truth (GT) labels in several cases, such as a Pass@1 of 94.01% on GSM8K with Qwen3-8B-Base, remarkably higher than the GT counterpart.
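
To make the model-side idea concrete, the following is an illustrative, non-official sketch of a slowly-updated reference teacher, assuming an exponential-moving-average update. The function names and the rate `tau` are assumptions for illustration and do not reproduce the repository's implementation:

```python
import copy
import torch

def make_teacher(policy: torch.nn.Module) -> torch.nn.Module:
    """Create a frozen copy of the policy to serve as the reference teacher."""
    teacher = copy.deepcopy(policy)
    for param in teacher.parameters():
        param.requires_grad_(False)
    return teacher

@torch.no_grad()
def update_teacher(teacher: torch.nn.Module, policy: torch.nn.Module, tau: float = 0.999) -> None:
    """Slowly track the policy: teacher <- tau * teacher + (1 - tau) * policy."""
    for t_param, p_param in zip(teacher.parameters(), policy.parameters()):
        t_param.mul_(tau).add_(p_param, alpha=1.0 - tau)
```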

## Citation

If you use our datasets or models, please cite our paper!

```bibtex
@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}
```