Improve model card: Add pipeline tag, library name, and paper link

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +30 -3
README.md CHANGED
@@ -1,8 +1,35 @@
 ---
 license: mit
+library_name: transformers
+pipeline_tag: text-generation
 ---
-### Self-Certainty: Qwen3-4B-Base trained on DAPO-14k
 
-This is the Qwen3-4B-Base model trained by Self-Certainty Maximization using DAPO-14k training set.
+# Self-Certainty: Qwen3-4B-Base trained on DAPO-14k
 
-If you are interested in Co-rewarding, you can find more details on our Github Repo [https://github.com/tmlr-group/Co-rewarding].
+This model is a Qwen3-4B-Base checkpoint trained by **Self-Certainty Maximization** using the DAPO-14k dataset, as part of the research presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).
+
+The "Co-rewarding" framework is a novel self-supervised reinforcement learning (RL) framework designed to improve training stability by seeking complementary supervision from multiple views, addressing common challenges in self-rewarding methods for Large Language Models (LLMs). This specific model contributes to eliciting stronger reasoning abilities, particularly on mathematical reasoning benchmarks.
+
+**Paper:** [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410)
+**Code Repository:** https://github.com/tmlr-group/Co-rewarding
+
+---
+
+## Model Description
+
+This is the Qwen3-4B-Base model trained by Self-Certainty Maximization using the DAPO-14k training set.
+
+For more details on the Co-rewarding framework, training procedures, and other checkpoints, please refer to the [Github Repository](https://github.com/tmlr-group/Co-rewarding).
+
+---
+
+## Citation
+If you use our datasets or models, please cite our paper!
+```
+@article{zhang2025co,
+  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
+  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
+  journal={arXiv preprint arXiv:2508.00410},
+  year={2025}
+}
+```
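Since the updated card declares `library_name: transformers` and `pipeline_tag: text-generation`, a minimal usage sketch along those lines may be useful. This is only an illustration, not part of the PR: the model id below is a placeholder for this repository's actual id, and it assumes the checkpoint loads as a standard causal LM.

```python
# Minimal usage sketch. Assumptions: the repo hosts a standard causal LM checkpoint
# compatible with the transformers text-generation pipeline, and the model id below
# is a PLACEHOLDER to be replaced with this repository's actual id.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="<org>/<self-certainty-qwen3-4b-base-dapo14k>",  # placeholder model id
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # requires accelerate; spreads weights across available devices
)

prompt = "Question: If 3x + 5 = 20, what is x?\nAnswer:"
outputs = generator(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```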