---
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- multimodal-llm
- reasoning
- math
datasets:
- MMR1/MMR1-SFT
- MMR1/MMR1-RL
---

[![arXiv](https://img.shields.io/badge/arXiv-2509.21268-b31b1b.svg)](https://arxiv.org/abs/2509.21268)
[![Hugging Face](https://img.shields.io/badge/HuggingFace-MMR1-FFAE1A)](https://huggingface.co/papers/2509.21268)

# MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources

## Paper Abstract
Large multimodal reasoning models have achieved rapid progress, but their advancement is constrained by two major limitations: the absence of open, large-scale, high-quality long chain-of-thought (CoT) data, and the instability of reinforcement learning (RL) algorithms in post-training. Group Relative Policy Optimization (GRPO), the standard framework for RL fine-tuning, is prone to gradient vanishing when reward variance is low, which weakens optimization signals and impairs convergence. This work makes three contributions: (1) We propose Variance-Aware Sampling (VAS), a data selection strategy guided by the Variance Promotion Score (VPS) that combines outcome variance and trajectory diversity to promote reward variance and stabilize policy optimization. (2) We release large-scale, carefully curated resources containing ~1.6M long CoT cold-start data and ~15k RL QA pairs, designed to ensure quality, difficulty, and diversity, along with a fully reproducible end-to-end training codebase. (3) We open-source a family of multimodal reasoning models at multiple scales, establishing standardized baselines for the community. Experiments across mathematical reasoning benchmarks demonstrate the effectiveness of both the curated data and the proposed VAS. Comprehensive ablation studies and analyses provide further insight into the contributions of each component. In addition, we theoretically establish that reward variance lower-bounds the expected policy gradient magnitude, with VAS serving as a practical mechanism to realize this guarantee. Our code, data, and checkpoints are available at [https://github.com/LengSicong/MMR1](https://github.com/LengSicong/MMR1).

## Code and Project Links
- **GitHub Repository:** [https://github.com/LengSicong/MMR1](https://github.com/LengSicong/MMR1)
- **Paper on Hugging Face:** [https://huggingface.co/papers/2509.21268](https://huggingface.co/papers/2509.21268)

<p align="center">
    <img src="https://github.com/LengSicong/MMR1/blob/main/assets/logo.png?raw=true" width="150" style="margin-bottom: 0.2;"/>
</p>

<h3 align="center"><a href="https://huggingface.co/papers/2509.21268" style="color:#9C276A">
MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources</a></h3>
<h5 align="center"> If our project helps you, please give us a star ⭐ on GitHub and upvote our HF paper to support us. 🙏🙏 </h5>

<h5 align="center">

[![hf_data](https://img.shields.io/badge/🤗-Dataset-9C276A.svg)](https://huggingface.co/MMR1/datasets)
[![hf_checkpoint](https://img.shields.io/badge/🤗-Checkpoints-9C276A.svg)](https://huggingface.co/MMR1/models)
[![hf_paper](https://img.shields.io/badge/🤗-Paper%20In%20HF-red.svg)](https://huggingface.co/papers/2509.21268)
[![arXiv](https://img.shields.io/badge/Arxiv-2509.21268-AD1C18.svg?logo=arXiv)](https://arxiv.org/abs/2509.21268)
<br>
</h5>

## 📰 News
* **[2025.09.25]** 🔥🔥 Released the [technical report](https://huggingface.co/papers/2509.21268)!
* **[2025.09.25]** 🚀🚀 Released the MMR1-SFT (~1.6M) and MMR1-RL (15k) datasets!
* **[2025.09.25]** 🚀🚀 Released MMR1-3B and MMR1-7B; the 32B checkpoint is on the way!
* **[2025.09.25]** The old repo has been moved to the branch [mmr1_v0](https://github.com/LengSicong/MMR1/tree/mmr1_v0?tab=readme-ov-file).
* **[2025.03.11]** 🔥🔥 Released MMR1-Math-v0-7B, achieving SOTA with only **6k public training samples**!

<h2><img src="https://github.com/LengSicong/MMR1/blob/main/assets/logo.png?raw=true" width="25"> Introduction</h2>

This repo introduces our work on enhancing multimodal reasoning models. Current progress is limited by:

- ❌ **Lack of open, large-scale, high-quality long chain-of-thought (CoT) data**
- ❌ **Instability of RL fine-tuning**, where standard GRPO often suffers from *gradient vanishing* under low reward variance

### 🔑 Our Contributions
- **Variance-Aware Sampling (VAS):**
  A new data selection strategy guided by the *Variance Promotion Score (VPS)*. VAS combines outcome variance and trajectory diversity to promote reward variance, stabilize policy optimization, and improve convergence.

- **Large-scale curated resources:**
  - ~1.6M long CoT cold-start trajectories with verified short answers
  - ~15k RL QA pairs
  - Designed for **quality, difficulty, and diversity**

- **Open-source codebase & models:**
  - Fully reproducible end-to-end training pipeline
  - Released models at multiple scales as standardized baselines for multimodal reasoning

Please refer to our [TRAIN.md](https://github.com/LengSicong/MMR1/blob/main/TRAIN.md) for detailed instructions on training with VAS.

## 💡 Methodology Overview
Our method introduces **Variance-Aware Sampling (VAS)** to address the *gradient vanishing problem* in reinforcement learning with Group Relative Policy Optimization (GRPO).
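The gradient-vanishing issue is easy to see numerically: GRPO normalizes each rollout's reward against its group's statistics, so a group with zero reward variance assigns zero advantage to every rollout. The sketch below is our own minimal illustration, not the released training code:

```python
import statistics

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages: (r - group mean) / (group std + eps)."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# A prompt the policy always solves: zero reward variance, so every
# advantage (and hence the policy-gradient signal) collapses to 0.
print(grpo_advantages([1.0, 1.0, 1.0, 1.0]))  # -> [0.0, 0.0, 0.0, 0.0]

# Mixed outcomes give non-zero advantages and a usable gradient.
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))
```

VAS counteracts this by steering sampling toward prompts whose reward variance is high, so fewer groups fall into the all-correct or all-wrong regime.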

<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/fig1.png?raw=true" alt="Overview of the VAS framework" width="700"/>
</p>

### 🔹 Framework
As illustrated in **Figure 1**, training begins with a pool of prompts from the dataset:
1. A **random sampler** provides uniform coverage of the data.
2. A **weighted sampler**, guided by the Variance Promotion Score (VPS), prioritizes prompts with higher reward variance and trajectory diversity.
3. These two sources are combined to form training batches, balancing exploration and coverage.
4. The policy model generates rollouts, which are evaluated with rewards and used to update the policy. VPS scores are periodically re-estimated as the model improves, ensuring dynamic adaptation.

This design ensures that training consistently focuses on prompts that provide strong learning signals, while still maintaining sufficient randomness for coverage.
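As a rough sketch of the batch-mixing step, a batch can combine VPS-weighted draws with uniform ones. The function and parameter names here (`build_batch`, `lam`) are our own illustration, not identifiers from the released codebase:

```python
import random

def build_batch(prompts, vps, batch_size, lam=0.5, seed=0):
    """Mix a weighted sampler (probability proportional to VPS)
    with a uniform random sampler.

    `lam` is the mixture ratio: that fraction of the batch is drawn
    by VPS weight; the remainder is drawn uniformly for coverage.
    """
    rng = random.Random(seed)
    n_weighted = int(lam * batch_size)
    weighted = rng.choices(prompts, weights=[vps[p] for p in prompts], k=n_weighted)
    uniform = rng.choices(prompts, k=batch_size - n_weighted)
    return weighted + uniform

vps = {"p1": 0.25, "p2": 0.0, "p3": 0.16}  # toy VPS estimates per prompt
batch = build_batch(list(vps), vps, batch_size=8, lam=0.5)
print(len(batch))  # 8
```

Setting `lam = 1.0` recovers pure VAS, while `lam = 0.0` recovers plain uniform sampling, matching the two regimes compared in the analysis below.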

<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/algo1.png?raw=true" alt="algo" width="700"/>
</p>

### 🔹 Algorithm
**Algorithm 1** provides a step-by-step description of VAS within the GRPO framework:
- **Initialization:** For each prompt, multiple rollouts are sampled to estimate the pass rate, outcome variance (OVS), trajectory diversity (TDS), and VPS.
- **Periodic VPS update:** At specified intervals, these statistics are refreshed to reflect the evolving policy.
- **Batch construction:** A mixture of prompts is drawn, some uniformly at random and others proportionally to VPS, controlled by the mixture ratio λ.
- **Policy optimization:** Rollouts are generated for the selected prompts, the GRPO loss is computed, and the policy parameters are updated accordingly.

By adaptively steering training toward prompts with higher reward variance, VAS stabilizes optimization and amplifies gradient signals, enabling more efficient and robust learning.
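For binary rewards, a prompt's outcome variance is p(1 - p) for pass rate p, maximized at a 50% pass rate. The sketch below shows the shape of such a score; the `alpha` weighting and the exact combination of OVS and TDS are our own assumptions, and the paper's formula may differ:

```python
def outcome_variance(pass_rate):
    """Variance of a Bernoulli (correct/incorrect) reward with success
    probability pass_rate: p * (1 - p)."""
    return pass_rate * (1.0 - pass_rate)

def variance_promotion_score(pass_rate, trajectory_diversity, alpha=0.5):
    """Toy VPS: a weighted sum of outcome variance (OVS) and trajectory
    diversity (TDS), both assumed to lie in [0, 1]."""
    return alpha * outcome_variance(pass_rate) + (1.0 - alpha) * trajectory_diversity

# Always-solved (p=1.0) and never-solved (p=0.0) prompts get zero OVS;
# a 50% pass rate maximizes it.
for p in (0.0, 0.5, 1.0):
    print(p, outcome_variance(p))
```

This is why VAS must re-estimate VPS periodically: as the policy improves, a prompt's pass rate drifts toward 1.0 and its outcome variance (and thus its sampling weight) decays.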

## 📦 Open Resources

We release the following resources for the community:
- **[MMR1-SFT](https://huggingface.co/datasets/MMR1/MMR1-SFT) (~1.6M):** Supervised fine-tuning dataset with ~1.6M long CoT cold-start trajectories (Gemini 2.5 Pro/Flash) with verified short answers (GPT-4o)
- **[MMR1-RL](https://huggingface.co/datasets/MMR1/MMR1-RL) (15k):** RL dataset with 15k question-answer pairs (GPT-4o)
- **[MMR1-3B-SFT](https://huggingface.co/MMR1/MMR1-3B-SFT):** 3B checkpoint trained with MMR1-SFT
- **[MMR1-3B-RL](https://huggingface.co/MMR1/MMR1-3B-RL):** 3B checkpoint trained with MMR1-SFT and MMR1-RL
- **[MMR1-7B-SFT](https://huggingface.co/MMR1/MMR1-7B-SFT):** 7B checkpoint trained with MMR1-SFT
- **[MMR1-7B-RL](https://huggingface.co/MMR1/MMR1-7B-RL):** 7B checkpoint trained with MMR1-SFT and MMR1-RL
- **[MMR1-32B-SFT](https://huggingface.co/MMR1/MMR1-32B-SFT):** 32B checkpoint trained with MMR1-SFT
- **[MMR1-32B-RL](https://huggingface.co/MMR1/MMR1-32B-RL):** 32B checkpoint trained with MMR1-SFT and MMR1-RL (on the way!)

<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/data.png?raw=true" alt="data" width="700"/>
</p>

The dataset spans diverse domains, including mathematics, science, charts/figures, document tables, and general understanding, covering ~1.6M math samples and an additional ~37K samples across other domains. It integrates existing public resources (e.g., MathVerse, ScienceQA, ChartQA, DocVQA, GQA) together with newly curated and self-collected data, ensuring quality, difficulty, and diversity. This collection establishes one of the most comprehensive open resources for multimodal reasoning models.
We hope these resources can serve as a benchmark for the community and facilitate research on multimodal reasoning.

## 📊 Evaluation Results

We evaluate our models on a suite of **mathematics-related multimodal reasoning benchmarks** (MathVerse, MathVista, MathVision, LogicVista, and ChartQA).

<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/result.png?raw=true" alt="result" width="700"/>
</p>

- **MMR1-7B-RL** achieves an average score of **58.4**, establishing new state-of-the-art performance among 7B-scale reasoning models.
- **MMR1-3B-RL** performs competitively with **52.7**, showing strong reasoning ability even at a smaller scale.
- Our models consistently outperform or match larger baselines, demonstrating the effectiveness of **Variance-Aware Sampling (VAS)** and our curated **long CoT training data**.

## 🔍 Analysis of VAS Training Dynamics

We further analyze the effectiveness of **Variance-Aware Sampling (VAS)** through training efficiency and the evolution of **Variance Promotion Score (VPS)**.

<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/anal1.png?raw=true" alt="anal1" width="700"/>
</p>

**Training Efficiency (Fig. 2).**
- **Gradient norm**: VAS substantially amplifies gradient magnitudes compared to the vanilla baseline, mitigating the gradient vanishing issue. This indicates that VAS consistently provides stronger optimization signals.
- **Clip fraction**: Higher clipping fractions in VAS runs suggest that policy updates are closer to the trust-region boundary, enabling more effective utilization of the learning signal without destabilizing training.
- **Validation accuracy**: Both full VAS (λ = 1.0) and mixed VAS–random sampling (λ = 0.5) converge faster and achieve higher final accuracy than the baseline, demonstrating that VAS improves both efficiency and performance. Notably, the mixed strategy achieves competitive results while maintaining broader data coverage.

<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/anal2.png?raw=true" alt="anal2" width="700"/>
</p>

**VPS Dynamics (Fig. 3).**
- **Score distribution**: VPS distributions evolve from relatively uniform at the beginning of training to more concentrated in the middle bins, suggesting convergence in identifying consistently informative prompts.
- **Weight transitions**: Transition matrices show that many prompts shift across bins over time, with both upward and downward movements, reflecting the dynamic nature of reward variance as the policy evolves. Early transitions are more widespread, while later updates become more stable, consistent with convergence.
- **Interpretation**: This dynamic reweighting ensures that the model continually prioritizes prompts with higher variance while still allowing redistribution as learning progresses, preventing overfitting to a static subset of data.

👉 Together, these analyses highlight how **VAS effectively mitigates gradient vanishing, improves sample efficiency, and adapts dynamically to the evolving training landscape.**

## 🎨 Qualitative Demo

To illustrate the reasoning capability of our models, we provide qualitative examples from **MathVerse**.
The demo showcases how the model carefully analyzes the problem, plans a structured solution, executes step-by-step reasoning, verifies results, and even provides alternative solution paths.

<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/demo.png?raw=true" alt="demo" width="700"/>
</p>

This demonstrates the model’s ability to maintain logical consistency, perform reflective verification, and present human-readable reasoning traces.

## 🤝 Contribution and Contact
This project is still under active development. Community feedback and contributions are highly appreciated. If you want to contribute, please feel free to open a pull request or create an issue.


## 👍 Acknowledgement
Our MMR1 is built on top of [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), and [EasyR1](https://github.com/hiyouga/EasyR1/tree/main).
In addition, MMR1 benefits from many open-source efforts. We sincerely appreciate them and compile a list in [ACKNOWLEDGEMENT.md](https://github.com/LengSicong/MMR1/blob/main/ACKNOWLEDGEMENT.md) to express our gratitude. If your work is used in MMR1 but not mentioned in either this repo or the technical report, feel free to let us know :heart:.

<details open><summary>💡 Some other multimodal-LLM projects from our team may interest you ✨. </summary><p>

> [**VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding**](https://github.com/DAMO-NLP-SG/VideoLLaMA3) <br>
> Boqiang Zhang<sup>* </sup>, Kehan Li<sup>* </sup>, Zesen Cheng<sup>* </sup>, Zhiqiang Hu<sup>* </sup>, Yuqian Yuan<sup>* </sup>, Guanzheng Chen<sup>* </sup>, Sicong Leng<sup>* </sup>, Yuming Jiang<sup>* </sup>, Hang Zhang<sup>* </sup>, Xin Li<sup>* </sup>, Peng Jin, Wenqi Zhang, Fan Wang, Lidong Bing, Deli Zhao <br>
[![github](https://img.shields.io/badge/-Github-black?logo=github)](https://github.com/DAMO-NLP-SG/VideoLLaMA3) [![github](https://img.shields.io/github/stars/DAMO-NLP-SG/VideoLLaMA3.svg?style=social)](https://github.com/DAMO-NLP-SG/VideoLLaMA3) [![arXiv](https://img.shields.io/badge/Arxiv-2501.13106-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2501.13106) <br>

> [**VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs**](https://github.com/DAMO-NLP-SG/VideoLLaMA2) <br>
> Zesen Cheng*, Sicong Leng*, Hang Zhang*, Yifei Xin*, Xin Li*, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, Lidong Bing <br>
[![github](https://img.shields.io/badge/-Github-black?logo=github)](https://github.com/DAMO-NLP-SG/VideoLLaMA2) [![github](https://img.shields.io/github/stars/DAMO-NLP-SG/VideoLLaMA2.svg?style=social)](https://github.com/DAMO-NLP-SG/VideoLLaMA2) [![arXiv](https://img.shields.io/badge/Arxiv-2406.07476-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2406.07476) <br>

> [**VCD: Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding**](https://arxiv.org/abs/2311.16922) <br>
> Sicong Leng*, Hang Zhang*, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, Lidong Bing <br>
[![github](https://img.shields.io/badge/-Github-black?logo=github)](https://github.com/DAMO-NLP-SG/VCD) [![github](https://img.shields.io/github/stars/DAMO-NLP-SG/VCD.svg?style=social)](https://github.com/DAMO-NLP-SG/VCD) [![arXiv](https://img.shields.io/badge/Arxiv-2311.16922-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2311.16922) <br>

> [**The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio**](https://arxiv.org/abs/2410.12787) <br>
> Sicong Leng*, Yun Xing*, Zesen Cheng*, Yang Zhou, Hang Zhang, Xin Li, Deli Zhao, Shijian Lu, Chunyan Miao, Lidong Bing <br>
[![github](https://img.shields.io/badge/-Github-black?logo=github)](https://github.com/DAMO-NLP-SG/CMM) [![github](https://img.shields.io/github/stars/DAMO-NLP-SG/CMM.svg?style=social)](https://github.com/DAMO-NLP-SG/CMM) [![arXiv](https://img.shields.io/badge/Arxiv-2410.12787-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2410.12787) <br>

> [**Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss**](https://arxiv.org/abs/2410.17243) <br>
> Zesen Cheng*, Hang Zhang*, Kehan Li*, Sicong Leng, Zhiqiang Hu, Fei Wu, Deli Zhao, Xin Li, Lidong Bing <br>
[![github](https://img.shields.io/badge/-Github-black?logo=github)](https://github.com/DAMO-NLP-SG/Inf-CLIP) [![github](https://img.shields.io/github/stars/DAMO-NLP-SG/Inf-CLIP.svg?style=social)](https://github.com/DAMO-NLP-SG/Inf-CLIP) [![arXiv](https://img.shields.io/badge/Arxiv-2410.17243-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2410.17243) <br>

> [**VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM**](https://arxiv.org/abs/2501.00599) <br>
> Yuqian Yuan, Hang Zhang, Wentong Li, Zesen Cheng, Boqiang Zhang, Long Li, Xin Li, Deli Zhao, Wenqiao Zhang, Yueting Zhuang, Jianke Zhu, Lidong Bing <br>
[![github](https://img.shields.io/badge/-Github-black?logo=github)](https://github.com/DAMO-NLP-SG/VideoRefer) [![github](https://img.shields.io/github/stars/DAMO-NLP-SG/VideoRefer.svg?style=social)](https://github.com/DAMO-NLP-SG/VideoRefer) [![arXiv](https://img.shields.io/badge/Arxiv-2501.00599-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2501.00599) <br>

</p></details>

## 📑 Citation

If you find MMR1 useful for your research and applications, please cite using this BibTeX:

```bibtex
@misc{leng2025mmr1,
  title={MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources},
  author={Sicong Leng and Jing Wang and Jiaxi Li and Hao Zhang and Zhiqiang Hu and Boqiang Zhang and Yuming Jiang and Hang Zhang and Xin Li and Lidong Bing and Deli Zhao and Wei Lu and Yu Rong and Aixin Sun and Shijian Lu},
  year={2025},
  eprint={2509.21268},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.21268},
}
```

## 🔒 License

This project is released under the Apache 2.0 license, as found in the [LICENSE](https://github.com/LengSicong/MMR1/blob/main/LICENSE) file.
The service is a research preview intended for **non-commercial use ONLY**, subject to the model licenses of Qwen, the terms of use of data generated by OpenAI and Gemini, and the privacy practices of ShareGPT. Please get in touch with us if you find any potential violations.

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=LengSicong/MMR1&type=Date)](https://star-history.com/#LengSicong/MMR1&Date)