Update dataset card for ReT-M2KR: Add task category, links, and sample usage

#1
opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +85 -16
README.md CHANGED
@@ -1,16 +1,31 @@
  ---
  license: mit
  ---

- The dataset used to train and evaluate [ReT](https://www.arxiv.org/abs/2503.01980) for multimodal information retrieval. The dataset is almost the same as the original [M2KR](https://huggingface.co/datasets/BByrneLab/multi_task_multi_modal_knowledge_retrieval_benchmark_M2KR), with a few modifications:
- - we exlude any data from MSMARCO, as it does not contain query images;
- - we add passage images to OVEN, InfoSeek, E-VQA, and OKVQA. Refer to the paper for more details.

- ## Sources
- - **Repository:** https://github.com/aimagelab/ReT
- - **Paper:** [Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval](https://www.arxiv.org/abs/2503.01980) (CVPR 2025)

  ## Download images
  1. Initialize git LFS
@@ -35,20 +50,74 @@ cd ../images/evqa_kb
  cat evqa-kb-img-{00000..00241}.tar.gz | tar xzf -
  ```

- ## RAG - InfoSeek
- `jsonl/rag/kb_infoseek525k.jsonl` is the knowledge base used to execute experiments on Retrieval-Augmented Generation on the InfoSeek benchmark. The field `passage_image_path` contains a relative path to the Wikipedia image associated with a given passage. The Wikipedia images can be downloaded from the [OVEN](https://huggingface.co/datasets/ychenNLP/oven/blob/main/all_wikipedia_images.tar) repository.

- ## Citation

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

- **BibTeX:**
  ```
- @inproceedings{caffagni2025recurrence,
-   title={{Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval}},
-   author={Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
-   booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
-   year={2025}
  }
  ```
 
  ---
  license: mit
+ task_categories:
+ - visual-document-retrieval
+ language:
+ - en
+ tags:
+ - multimodal
+ - retrieval
+ - image-text-retrieval
+ - rag
+ - vqa
+ - m2kr
+ - m-beir
+ - vision-language
  ---

+ # ReT-M2KR Dataset

+ This repository contains the dataset used to train and evaluate **ReT-2: Recurrence Meets Transformers for Universal Multimodal Retrieval**. ReT-2 is a unified retrieval model designed to support multimodal queries (composed of images and text) and search across multimodal document collections where text and images coexist.

+ Paper: [Recurrence Meets Transformers for Universal Multimodal Retrieval](https://huggingface.co/papers/2509.08897)
+ Code: https://github.com/aimagelab/ReT-2

+ This dataset is a modified version of the original [M2KR](https://huggingface.co/datasets/BByrneLab/multi_task_multi_modal_knowledge_retrieval_benchmark_M2KR) benchmark, specifically adapted for ReT-2. The modifications include:
+ - Excluding any data from MSMARCO, as it does not contain query images.
+ - Adding passage images to OVEN, InfoSeek, E-VQA, and OKVQA.
+ Refer to the paper for more details.

  ## Download images
  1. Initialize git LFS

  cat evqa-kb-img-{00000..00241}.tar.gz | tar xzf -
  ```

+ ## Sample Usage

+ Here's an example of how to use ReT-2 with 🤗's Transformers to compute query-passage similarity:

+ ```python
+ from src.models import Ret2Model
+ import requests
+ from PIL import Image
+ from io import BytesIO
+ import torch
+ import torch.nn.functional as F
+
+ device = 'cuda' if torch.cuda.is_available() else 'cpu'
+
+ headers = {
+     'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
+ }
+
+ query_img_url = 'https://upload.wikimedia.org/wikipedia/commons/8/84/Ghirlandina_%28Modena%29.jpg'
+ response = requests.get(query_img_url, headers=headers)
+ query_image = Image.open(BytesIO(response.content)).convert('RGB')
+ query_text = 'Where is this building located?'
+
+ passage_img_url = 'https://upload.wikimedia.org/wikipedia/commons/0/09/Absidi_e_Ghirlandina.jpg'
+ response = requests.get(passage_img_url, headers=headers)
+ passage_image = Image.open(BytesIO(response.content)).convert('RGB')
+ passage_text = (
+     "The Ghirlandina is the bell tower of the Cathedral of Modena, in Modena, Italy. "
+     "It is 86.12 metres (282.7 ft) high and is the symbol of the city. "
+     "It was built in Romanesque style in the 12th century and is part of a UNESCO World Heritage Site."
+ )
+
+ model = Ret2Model.from_pretrained('aimagelab/ReT2-M2KR-ColBERT-SigLIP2-ViT-L', device_map=device)
+
+ query_txt_inputs = model.tokenizer([query_text], return_tensors='pt').to(device)
+ query_img_inputs = model.image_processor([query_image], return_tensors='pt').to(device)
+ passage_txt_inputs = model.tokenizer([passage_text], return_tensors='pt').to(device)
+ passage_img_inputs = model.image_processor([passage_image], return_tensors='pt').to(device)
+
+ with torch.inference_mode():
+     query_feats = model.get_ret_features(
+         input_ids=query_txt_inputs.input_ids,
+         attention_mask=query_txt_inputs.attention_mask,
+         pixel_values=query_img_inputs.pixel_values
+     )
+
+     passage_feats = model.get_ret_features(
+         input_ids=passage_txt_inputs.input_ids,
+         attention_mask=passage_txt_inputs.attention_mask,
+         pixel_values=passage_img_inputs.pixel_values
+     )
+
+ sim = F.normalize(query_feats, p=2, dim=-1) @ F.normalize(passage_feats, p=2, dim=-1).T
+
+ print(f"query-passage similarity: {sim.item():.3f}")
  ```
+
+ ## RAG - InfoSeek
+ `jsonl/rag/kb_infoseek525k.jsonl` is the knowledge base used to execute experiments on Retrieval-Augmented Generation on the InfoSeek benchmark. The field `passage_image_path` contains a relative path to the Wikipedia image associated with a given passage. The Wikipedia images can be downloaded from the [OVEN](https://huggingface.co/datasets/ychenNLP/oven/blob/main/all_wikipedia_images.tar) repository.
+
+ ## Citation
+
+ If you use our work, please cite it with the following BibTeX:
+ ```bibtex
+ @article{caffagni2025recurrencemeetstransformers,
+   title={{Recurrence Meets Transformers for Universal Multimodal Retrieval}},
+   author={Davide Caffagni and Sara Sarto and Marcella Cornia and Lorenzo Baraldi and Rita Cucchiara},
+   journal={arXiv preprint arXiv:2509.08897},
+   year={2025}
  }
  ```
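
As a follow-up to the `Sample Usage` section added above, here is a minimal sketch of ranking a few candidate passages for one query with the same `Ret2Model` calls (`tokenizer`, `image_processor`, `get_ret_features`), assuming `get_ret_features` returns one feature vector per input as in that example; the local image paths and passage texts below are placeholders, not part of the dataset.

```python
import torch
import torch.nn.functional as F
from PIL import Image

# Same import as the Sample Usage snippet: run from the ReT-2 repository root.
from src.models import Ret2Model

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = Ret2Model.from_pretrained('aimagelab/ReT2-M2KR-ColBERT-SigLIP2-ViT-L', device_map=device)


def encode(text: str, image: Image.Image) -> torch.Tensor:
    # Encode one (text, image) pair into a retrieval feature vector,
    # mirroring the Sample Usage snippet above.
    txt_inputs = model.tokenizer([text], return_tensors='pt').to(device)
    img_inputs = model.image_processor([image], return_tensors='pt').to(device)
    with torch.inference_mode():
        return model.get_ret_features(
            input_ids=txt_inputs.input_ids,
            attention_mask=txt_inputs.attention_mask,
            pixel_values=img_inputs.pixel_values,
        )


# Placeholder query and candidate passages; swap in real M2KR entries.
query_feats = encode('Where is this building located?', Image.open('query.jpg').convert('RGB'))
candidates = [
    ('passage_0.jpg', 'The Ghirlandina is the bell tower of the Cathedral of Modena, in Modena, Italy.'),
    ('passage_1.jpg', 'The Eiffel Tower is a wrought-iron lattice tower in Paris, France.'),
]
passage_feats = torch.cat(
    [encode(text, Image.open(path).convert('RGB')) for path, text in candidates], dim=0
)

# Cosine similarity between the query and every candidate, highest first.
sims = (F.normalize(query_feats, p=2, dim=-1) @ F.normalize(passage_feats, p=2, dim=-1).T).squeeze(0)
for rank, idx in enumerate(sims.argsort(descending=True).tolist(), start=1):
    print(f'{rank}. {candidates[idx][0]} (score={sims[idx].item():.3f})')
```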
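
For the `RAG - InfoSeek` knowledge base described above, here is a minimal sketch of iterating over `jsonl/rag/kb_infoseek525k.jsonl`, assuming a standard JSON Lines layout (one JSON object per line) and that the OVEN Wikipedia images have been extracted locally; the `wiki_images/` directory is a hypothetical path, and only the `passage_image_path` field is documented on the card.

```python
import json
from pathlib import Path

# Hypothetical directory where all_wikipedia_images.tar (from the OVEN repository)
# has been extracted.
WIKI_IMAGE_ROOT = Path('wiki_images')

kb_path = Path('jsonl/rag/kb_infoseek525k.jsonl')

with kb_path.open() as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        # `passage_image_path` is relative to the Wikipedia image dump.
        image_path = WIKI_IMAGE_ROOT / record['passage_image_path']
        if i == 0:
            # Inspect the remaining (undocumented) fields of the first record.
            print('fields:', sorted(record.keys()))
        print(image_path, '->', 'found' if image_path.exists() else 'missing')
        if i >= 4:  # only peek at the first few passages
            break
```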