wukeming11 and nielsr (HF Staff) committed
Commit 7ecc53f · verified · 1 parent: ac1c9c5

Enhance dataset card with task categories, tags, license, and overview (#2)


- Enhance dataset card with task categories, tags, license, and overview (931bb4e50aa826ad2449855684dc1cd79e52c8b0)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +18 -15
README.md CHANGED
@@ -3,38 +3,43 @@ configs:
 - config_name: algopuzzle
   data_files:
   - split: train
-    path: "algopuzzle_train.parquet"
+    path: algopuzzle_train.parquet
 - config_name: mmk12
   data_files:
   - split: train
-    path: "mmk12_train.parquet"
+    path: mmk12_train.parquet
 - config_name: thinklite_vl_hard
   data_files:
   - split: train
-    path: "thinklite_vl_hard_train.parquet"
+    path: thinklite_vl_hard_train.parquet
 - config_name: tqa_train
   data_files:
   - split: train
-    path: "tqa_train.parquet"
+    path: tqa_train.parquet
 - config_name: virl39k
   data_files:
   - split: train
-    path: "virl39k_train.parquet"
+    path: virl39k_train.parquet
 - config_name: wemath_pro
   data_files:
   - split: train
-    path: "wemath_pro.parquet"
+    path: wemath_pro.parquet
 - config_name: wemath_standard
   data_files:
   - split: train
-    path: "wemath_standard.parquet"
+    path: wemath_standard.parquet
 - config_name: validation
   data_files:
   - split: val
-    path: "val.parquet"
+    path: val.parquet
+task_categories:
+- image-text-to-text
+tags:
+- sft
+- reinforcement-learning
+license: cc-by-nc-4.0
 ---
 
-
 # OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe
 
 <div align="center">
@@ -45,6 +50,9 @@
 [![Github](https://img.shields.io/badge/Code-000000?style=for-the-badge&logo=github&logoColor=white)](https://github.com/EvolvingLMMs-Lab/OpenMMReasoner)
 </div>
 
+## Overview
+Recent advancements in large reasoning models have fueled growing interest in extending such capabilities to multimodal domains. However, despite notable progress in visual reasoning, the lack of transparent and reproducible data curation and training strategies remains a major barrier to scalable research. In this work, we introduce OpenMMReasoner, a fully transparent two-stage recipe for multimodal reasoning spanning supervised fine-tuning (SFT) and reinforcement learning (RL). In the SFT stage, we construct an 874K-sample cold-start dataset with rigorous step-by-step validation, providing a strong foundation for reasoning capabilities. The subsequent RL stage leverages a 74K-sample dataset across diverse domains to further sharpen and stabilize these abilities, resulting in a more robust and efficient learning process. Extensive evaluations demonstrate that our training recipe not only surpasses strong baselines but also highlights the critical role of data quality and training design in shaping multimodal reasoning performance. Notably, our method achieves an 11.6% improvement over the Qwen2.5-VL-7B-Instruct baseline across nine multimodal reasoning benchmarks, establishing a solid empirical foundation for future large-scale multimodal reasoning research.
+
 Here are the RL Data used to train **[OpenMMReasoner-RL](https://huggingface.co/OpenMMReasoner/OpenMMReasoner-RL)**. We use **[verl](https://github.com/volcengine/verl)** as the training framework.
 
 To use this dataset, first snapshot-download the entire repository to your local machine. After that, you can load the dataset using the example script provided in our GitHub repository by pointing it to your local data folder and the parquet file.
@@ -65,9 +73,4 @@ ray job submit --address="http://127.0.0.1:8265" \
     data.val_files=${DATA_FOLDER}/val.parquet \
 
 
-    ... rest of the command args ...
-
-    ```
-
-
-
+    ... rest of the command args ...
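For reference, here is a minimal sketch of the snapshot-download-then-load workflow the card describes, assuming `huggingface_hub` and `datasets` are installed. The repo id placeholder and local folder name below are illustrative, not part of the card; the authoritative example script lives in the GitHub repository.

```python
# Minimal sketch of the workflow described in the card (not the official script).
from huggingface_hub import snapshot_download
from datasets import load_dataset

# Download the entire dataset repository (all parquet files) to a local folder.
# Replace the repo id with this dataset's actual id on the Hub.
data_folder = snapshot_download(
    repo_id="OpenMMReasoner/<this-dataset>",  # placeholder repo id
    repo_type="dataset",
    local_dir="./openmmreasoner_rl_data",     # illustrative local path
)

# Point the parquet loader at one of the config files, e.g. mmk12.
train_ds = load_dataset(
    "parquet",
    data_files=f"{data_folder}/mmk12_train.parquet",
    split="train",
)
print(train_ds)
```

Because the card's YAML now defines named configs, `load_dataset("<repo-id>", "mmk12", split="train")` should also work directly once this commit is merged.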