Self-Certainty: Qwen3-4B-Base trained on DAPO-14k

This model is a Qwen3-4B-Base checkpoint trained with Self-Certainty Maximization on the DAPO-14k dataset, as part of the research presented in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.

The "Co-rewarding" framework is a novel self-supervised reinforcement learning (RL) framework designed to improve training stability by seeking complementary supervision from multiple views, addressing common challenges in self-rewarding methods for Large Language Models (LLMs). This specific model contributes to eliciting stronger reasoning abilities, particularly on mathematical reasoning benchmarks.

Paper: Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models
Code Repository: https://github.com/tmlr-group/Co-rewarding


Model Description

This is the Qwen3-4B-Base model trained with Self-Certainty Maximization on the DAPO-14k training set.
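As a rough illustration of the training signal, self-certainty is commonly measured as the average KL divergence between a uniform distribution over the vocabulary and the model's next-token distribution. The sketch below computes that quantity under this assumption; it is not necessarily the exact objective used in the paper (the `self_certainty` helper is hypothetical), so please consult the paper and repository for the actual implementation.

```python
# Minimal sketch (not the paper's exact implementation): score a generated
# answer by its self-certainty, here taken as the average KL divergence
# KL(U || p) between a uniform distribution U over the vocabulary and the
# model's next-token distribution p, averaged over answer positions.
import math

import torch
import torch.nn.functional as F


def self_certainty(logits: torch.Tensor) -> torch.Tensor:
    """logits: (seq_len, vocab_size) next-token logits for the answer tokens."""
    log_probs = F.log_softmax(logits.float(), dim=-1)
    vocab_size = logits.size(-1)
    # KL(U || p) at each position = -(1/|V|) * sum_v log(|V| * p(v))
    kl_per_position = -(log_probs + math.log(vocab_size)).mean(dim=-1)
    return kl_per_position.mean()  # higher = more peaked (confident) predictions
```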

For more details on the Co-rewarding framework, training procedures, and other checkpoints, please refer to the GitHub repository.
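
The checkpoint can be loaded like any other causal language model. Below is a minimal usage sketch with Hugging Face transformers; the prompt and generation settings are illustrative only and are not the paper's evaluation setup.

```python
# Minimal loading/generation sketch with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/Self-Certainty-Qwen3-4B-Base-DAPO14k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint is stored in BF16
    device_map="auto",
)

prompt = "Question: If 3x + 5 = 20, what is x? Let's think step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```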


Citation

If you use our datasets or models, please cite our paper!

@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}