SinclairWang committed (verified) · Commit fcab1c2 · 1 Parent(s): cd1fd02

Create README.md

---
license: llama3.2
datasets:
- OctoThinker/MegaMath-Web-Pro-Max
- LLM360/MegaMath
language:
- en
base_model:
- meta-llama/Llama-3.2-3B
pipeline_tag: text-generation
---

# [OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling](https://arxiv.org/abs/2506.20512)

## OctoThinker-3B-Hybrid-Base

The OctoThinker family is built on carefully studied mid-training insights, starting from the Llama-3 family, to create reinforcement-learning-friendly base language models.

### Training Recipe

<div style="display: flex; justify-content: left; gap: 20px;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/62cbeb2d72dfd24b86bdf977/olNNY0cy0wVxAQh2VwewO.png" alt="Training Recipe" style="width:90%;">
</div>

### Evaluation Results

Note that we adopt few-shot prompting to evaluate these base language models.
<div style="display: flex; justify-content: left; gap: 20px;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/62cbeb2d72dfd24b86bdf977/UCZ9MahRYqLY0iKjiWMqS.png" alt="Evaluation Results" style="width:80%;">
</div>
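As a rough illustration of what few-shot prompting of a base model looks like, the sketch below builds a prompt by prepending worked question/answer exemplars before the target question, so that a base (non-instruct) model can continue the pattern. The exemplars, shot count, and `Question:`/`Answer:` template here are hypothetical placeholders, not the exact evaluation setup used for OctoThinker.

```python
# Minimal sketch of few-shot prompt construction for a base model.
# NOTE: exemplars and formatting are illustrative assumptions, not the
# paper's actual evaluation harness.

FEW_SHOT_EXEMPLARS = [
    ("What is 12 * 8?", "12 * 8 = 96. The answer is 96."),
    ("If x + 3 = 10, what is x?", "x = 10 - 3 = 7. The answer is 7."),
]

def build_few_shot_prompt(question: str) -> str:
    """Concatenate Q/A exemplars ahead of the target question; the model
    is expected to continue the text after the final 'Answer:'."""
    parts = []
    for q, a in FEW_SHOT_EXEMPLARS:
        parts.append(f"Question: {q}\nAnswer: {a}\n")
    parts.append(f"Question: {question}\nAnswer:")
    return "\n".join(parts)

prompt = build_few_shot_prompt("What is 7 * 9?")
print(prompt)
```

The resulting string would typically be fed to the base model as a plain completion prompt (no chat template), with the model's continuation after the last `Answer:` parsed for the final answer.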
### More about OctoThinker

<div style="display: flex; justify-content: left; gap: 20px;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/62cbeb2d72dfd24b86bdf977/bn85CEB_DW6azJ7KJp11Q.png" alt="OctoThinker Overview" style="width:100%;">
</div>

## Citation

Check out our [paper](https://arxiv.org/abs/2506.20512) for more details. If you use our models or datasets, or find our work useful, please cite:

```bibtex
@article{wang2025octothinker,
  title={OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling},
  author={Wang, Zengzhi and Zhou, Fan and Li, Xuefeng and Liu, Pengfei},
  year={2025},
  journal={arXiv preprint arXiv:2506.20512},
  note={Preprint}
}
```