Improve model card: Add pipeline tag, library name, and paper link
#1 by nielsr (HF Staff) · opened

README.md CHANGED
```diff
@@ -1,8 +1,11 @@
 ---
 license: mit
+pipeline_tag: text-generation
+library_name: transformers
 ---
+
 ### Self-Certainty: Qwen3-8B-Base trained on DAPO-14k
 
-This is the Qwen3-8B-Base model trained by Self-Certainty Maximization using DAPO-14k training set.
+This is the Qwen3-8B-Base model trained by Self-Certainty Maximization using DAPO-14k training set, as presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).
 
 If you are interested in Co-rewarding, you can find more details on our Github Repo [https://github.com/tmlr-group/Co-rewarding].
```
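Setting `library_name: transformers` tells the Hub that the checkpoint loads with the standard transformers auto classes, and `pipeline_tag: text-generation` makes the model discoverable under the text-generation task. A minimal usage sketch of what that enables, assuming a hypothetical repo id (the actual model id is not shown in this diff) and that `accelerate` is installed for `device_map="auto"`:

```python
# Minimal sketch of loading a model whose card declares `library_name: transformers`.
# NOTE: the repo id below is a placeholder; substitute the actual model id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tmlr-group/Qwen3-8B-Base-Self-Certainty-DAPO14k"  # hypothetical id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the dtype stored in the checkpoint
    device_map="auto",    # shard/place weights automatically (requires accelerate)
)

prompt = "Solve: if 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```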