Add Github link, Transformers library, pipeline tag (#1)
- Add Github link, Transformers library, pipeline tag (dae500398dc94f74f11dc2ca96e021f98cd48804)
- Update README.md (f2ae93db76e03d9ade872fa899b32020d484b424)
Co-authored-by: Niels Rogge <[email protected]>
README.md CHANGED

````diff
@@ -1,5 +1,18 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
+datasets:
+- Satori-reasoning/Satori_FT_data
+- Satori-reasoning/Satori_RL_data
+base_model:
+- Qwen/Qwen2.5-Math-7B
+---
+
+---
+license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 datasets:
 - Satori-reasoning/Satori_FT_data
 - Satori-reasoning/Satori_RL_data
@@ -112,7 +125,7 @@ Satori-7B-Round2 achieves SOTA performance and outperforms Qwen-2.5-Math-7B-Inst
 | OpenMath2-Llama3.1-8B | 90.5 | 67.8 | 28.9 | 37.5 | 6.7 | 46.3 |
 | NuminaMath-7B-CoT | 78.9 | 54.6 | 15.9 | 20.0 | 10.0 | 35.9 |
 | Qwen-2.5-7B-Instruct | 91.6 | 75.5 | 35.5 | 52.5 | 6.7 | 52.4 |
-| Qwen-2.5-Math-7B-Instruct |95.2
+| Qwen-2.5-Math-7B-Instruct | 95.2 | 83.6 | 41.6 | 62.5 | 16.7 | 59.9 |
 | **Satori-7B-Round2** | 93.9 | 83.6 | 48.5 | 72.5 | 23.3 | **64.4** |
 
 ### **General Domain Reasoning Benchmarks**
@@ -140,6 +153,8 @@ Please refer to our blog and research paper for more technical details of Satori
 - [Blog](https://satori-reasoning.github.io/blog/satori/)
 - [Paper](https://arxiv.org/pdf/2502.02508)
 
+For code, see https://github.com/Satori-reasoning/Satori
+
 # **Citation**
 If you find our model and data helpful, please cite our paper:
 ```
````