GT-GRPO: Llama-3.2-3B-Instruct trained on DAPO-14k

This is the Llama-3.2-3B-Instruct model trained with GT-GRPO on the DAPO-14k training set, as presented in the paper "Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models".

If you are interested in Co-rewarding, you can find more details in our GitHub repo: https://github.com/tmlr-group/Co-rewarding.
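For convenience, here is a minimal inference sketch using the standard transformers chat API. The repository id and BF16 dtype are taken from this card; the prompt and generation settings are illustrative assumptions, not part of the original release.

```python
# Minimal sketch: load the checkpoint with the standard transformers API.
# Repo id and BF16 dtype come from this model card; everything else is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/GT-Llama-3.2-3B-Instruct-DAPO14k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # card lists the tensor type as BF16
    device_map="auto",
)

# Example math-reasoning prompt (hypothetical; pick any task you like).
messages = [{"role": "user", "content": "What is 17 * 24? Reason step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```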

Model size: 4B params · Tensor type: BF16 · Format: Safetensors

Model repository: TMLR-Group-HF/GT-Llama-3.2-3B-Instruct-DAPO14k