---
license: mit
---

# ReT-M2KR

The dataset used to train and evaluate ReT for multimodal information retrieval. It is almost identical to the original M2KR benchmark, with a few modifications:

- we exclude any data from MSMARCO, as it does not contain query images;
- we add passage images to OVEN, InfoSeek, E-VQA, and OKVQA (see the sketch below); refer to the paper for more details.
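
To make the second point concrete, here is a minimal sketch of how one might peek at a record and its passage image field. The file name `jsonl/train.jsonl` and all field names other than `passage_image_path` (which is documented below for the RAG knowledge base) are assumptions for illustration, not the documented schema; check the jsonl files in this repository for the exact fields of each split.

```python
import json

# Inspect the first record of a (hypothetical) ReT-M2KR jsonl split.
# The path and most field names are assumptions for illustration only.
with open("jsonl/train.jsonl") as f:
    record = json.loads(f.readline())

print(sorted(record.keys()))
# In an M2KR-style record one would expect a text query, an optional
# query image path, and a positive passage; per the modification above,
# passages here may also carry an image path.
print(record.get("passage_image_path"))
```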

## Sources

ReT Code

**Update 12/09/2025**
We have just released ReT-2: *Recurrence Meets Transformers for Universal Multimodal Retrieval*.
ReT-2 Code

## Download images

1. Initialize Git LFS:

```bash
git lfs install
```

2. Clone the repository (this may take a while):

```bash
git clone https://huggingface.co/datasets/aimagelab/ReT-M2KR
cd ReT-M2KR
```

3. Decompress the images (this may also take a while):

```bash
# M2KR images: concatenate the split archives and extract in one stream
cd images/m2kr
cat ret-img-{000..129}.tar.gz | tar xzf -

# Encyclopedic-VQA knowledge base images
cd ../evqa_kb
cat evqa-kb-img-{00000..00241}.tar.gz | tar xzf -
```

## RAG - InfoSeek

`jsonl/rag/kb_infoseek525k.jsonl` is the knowledge base used for Retrieval-Augmented Generation experiments on the InfoSeek benchmark. The field `passage_image_path` contains a relative path to the Wikipedia image associated with a given passage. The Wikipedia images can be downloaded from the OVEN repository.
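
As a minimal sketch, the knowledge base can be iterated line by line; each line is assumed to be a JSON object, and the OVEN image root directory (`oven_images/` below) is a placeholder for wherever you downloaded the Wikipedia images:

```python
import json
from pathlib import Path

# Placeholder: directory holding the Wikipedia images downloaded from
# the OVEN repository; adjust to your local download location.
OVEN_IMAGE_ROOT = Path("oven_images")

with open("jsonl/rag/kb_infoseek525k.jsonl") as f:
    for line in f:
        passage = json.loads(line)
        # `passage_image_path` is relative to the OVEN image root.
        image_path = OVEN_IMAGE_ROOT / passage["passage_image_path"]
        # ... pair the passage text with its image here ...
        break  # only inspect the first passage in this sketch
```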

## Citation

BibTeX:

```bibtex
@inproceedings{caffagni2025recurrence,
  title={{Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval}},
  author={Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```