nielsr (HF Staff) committed Β· verified Β· commit 8115975 Β· 1 parent: 8ca7af0

Add task category, paper, project page, and code links, and update citation

This PR significantly enhances the MM-OPERA dataset card by:
- Adding `image-text-to-text` to the `task_categories` metadata, making the dataset easier for researchers to discover.
- Including direct links to the associated paper ([https://huggingface.co/papers/2510.26937](https://huggingface.co/papers/2510.26937)), the project page ([https://mm-opera-bench.github.io/](https://mm-opera-bench.github.io/)), and the GitHub repository ([https://github.com/MM-OPERA-Bench/MM-OPERA](https://github.com/MM-OPERA-Bench/MM-OPERA)) at the top of the README.
- Updating the BibTeX citation to reflect the NeurIPS 2025 publication, as indicated in the project's GitHub repository.

These changes provide a more comprehensive and accessible resource for researchers utilizing the MM-OPERA dataset.
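
For a quick sanity check after this change, here is a minimal sketch of loading the dataset with the `datasets` library. The Hub repo id `MM-OPERA-Bench/MM-OPERA` is an assumption inferred from the project's GitHub organization, and the `ria`/`ica` split names come from the `configs` section visible in the diff below:

```python
from datasets import load_dataset

# Repo id is an assumption based on the project's GitHub organization;
# replace it with the actual Hub id of this dataset.
REPO_ID = "MM-OPERA-Bench/MM-OPERA"

# The card's `configs` section defines two splits:
# "ria" (data/ria-*) and "ica" (data/ica-*).
ds = load_dataset(REPO_ID)

print(ds)           # DatasetDict with "ria" and "ica" splits
row = ds["ica"][0]  # one instance; features include id, img_id3, filename3, description3
print(row["id"])
```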

Files changed (1): README.md (+13, βˆ’11)
````diff
--- a/README.md
+++ b/README.md
@@ -1,6 +1,9 @@
 ---
+license: cc-by-4.0
 tags:
 - Multimodal
+task_categories:
+- image-text-to-text
 dataset_info:
   features:
   - name: id
@@ -58,8 +61,10 @@ dataset_info:
     dtype: string
   - name: img_id3
     dtype: string
+    split: ica
   - name: filename3
     dtype: string
+    split: ica
   - name: description3
     dtype: string
     split: ica
@@ -85,11 +90,12 @@ configs:
     path: data/ria-*
   - split: ica
     path: data/ica-*
-license: cc-by-4.0
 ---
 
 # MM-OPERA: Multi-Modal OPen-Ended Reasoning-guided Association Benchmark 🧠🌐
 
+[Paper](https://huggingface.co/papers/2510.26937) | [Project Page](https://mm-opera-bench.github.io/) | [Code](https://github.com/MM-OPERA-Bench/MM-OPERA)
+
 ## Overview πŸ“–
 
 MM-OPERA is a benchmark designed to evaluate the open-ended association reasoning capabilities of Large Vision-Language Models (LVLMs). With 11,497 instances, it challenges models to identify and express meaningful connections across distant concepts in an open-ended format, mirroring human-like reasoning. The dataset spans diverse cultural, linguistic, and thematic contexts, making it a robust tool for advancing multimodal AI research. 🌍✨
@@ -163,14 +169,10 @@ Explore MM-OPERA to unlock the next level of multimodal association reasoning!
 If you use this dataset in your work, please cite it as follows:
 
 ```bibtex
-@misc{huang2025mmopera,
-  author = {Zimeng Huang and Jinxin Ke and Xiaoxuan Fan and Yufeng Yang and Yang Liu and Liu Zhonghan and Zedi Wang and Junteng Dai and Haoyi Jiang and Yuyu Zhou and Keze Wang and Ziliang Chen},
-  title = {MM-OPERA},
-  month = {oct},
-  year = {2025},
-  publisher = {Zenodo},
-  version = {1.0.0},
-  doi = {10.5281/zenodo.17300924},
-  url = {https://doi.org/10.5281/zenodo.17300924}
+@inproceedings{huang2025mmopera,
+  title={{MM-OPERA: Benchmarking Open-ended Association Reasoning for Large Vision-Language Models}},
+  author={Zimeng Huang and Jinxin Ke and Xiaoxuan Fan and Yufeng Yang and Yang Liu and Liu Zhonghan and Zedi Wang and Junteng Dai and Haoyi Jiang and Yuyu Zhou and Keze Wang and Ziliang Chen},
+  booktitle={Advances in Neural Information Processing Systems 39},
+  year={2025}
 }
-```
+```
````
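
As a follow-up check once the PR is merged, a minimal sketch (assuming the same hypothetical repo id as above) that confirms the card's merged YAML metadata via `huggingface_hub`:

```python
from huggingface_hub import DatasetCard

# Load the dataset card from the Hub (repo id assumed, as above).
card = DatasetCard.load("MM-OPERA-Bench/MM-OPERA")

# The YAML front matter is parsed into card.data; after this commit it
# should carry the license, the Multimodal tag, and the new task category.
print(card.data.license)          # expected: cc-by-4.0
print(card.data.tags)             # expected: ['Multimodal']
print(card.data.task_categories)  # expected: ['image-text-to-text']
```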