---
license: apache-2.0
task_categories:
- question-answering
- multiple-choice
- video-text-to-text
---

# VidComposition Dataset

[🖥 Project Page](https://yunlong10.github.io/VidComposition) | [🚀 Evaluation Space](https://huggingface.co/spaces/JunJiaGuo/VidComposition)

**VidComposition** is a video-language benchmark designed to evaluate the **temporal** and **compositional reasoning** capabilities of models through multiple-choice question answering based on short video segments.

This dataset release includes annotated video segments, reasoning-type labels, and multiple-choice QA pairs.

---

## 📁 Dataset Format

Each item in the dataset is a JSON object structured as follows:

```json
{
  "video": "0SIK_5qpD70",
  "segment": "0SIK_5qpD70_183.3_225.5.mp4",
  "class": "background_perception",
  "question": "What is the main background in the video?",
  "options": {
    "A": "restaurant",
    "B": "hallway",
    "C": "grassland",
    "D": "wood"
  },
  "id": "1cad95c1-d13a-4ef0-b1c1-f7e753b5122f"
}
```
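
For reference, here is a minimal Python sketch that iterates over the annotation file and turns each item into a multiple-choice prompt. The annotation file name, the local `videos/` directory, and the prompt wording are illustrative assumptions, not part of the official release.

```python
import json

# Load the annotation file (file name assumed; point this at your local copy).
with open("vidcomposition.json", "r", encoding="utf-8") as f:
    items = json.load(f)

for item in items:
    # Build a multiple-choice prompt from the question and its lettered options.
    options = "\n".join(f"{key}. {text}" for key, text in item["options"].items())
    prompt = f"{item['question']}\n{options}\nAnswer with the option letter only."
    # Assumed local layout: video segments stored under videos/.
    video_path = f"videos/{item['segment']}"
    # Feed (video_path, prompt) to your video-language model here.
```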

## 🧪 Evaluation

To evaluate your model on VidComposition, format your prediction file as follows:

```json
[
  {
    "id": "1cad95c1-d13a-4ef0-b1c1-f7e753b5122f",
    "model_answer": "A"
  },
  ...
]
```
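
As a sketch of how such a prediction file might be produced (the `run_model` function and the output file name are placeholders for your own inference code):

```python
import json

# `items` is the list of annotation entries loaded as in the sketch above;
# `run_model` is a placeholder that should return one of "A", "B", "C", "D".
predictions = [
    {"id": item["id"], "model_answer": run_model(item)}
    for item in items
]

# Output file name is an assumption; use whatever your evaluation workflow expects.
with open("predictions.json", "w", encoding="utf-8") as f:
    json.dump(predictions, f, indent=2)
```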

## 📚 Citation

If you find this dataset useful, please cite the following paper:

```bibtex
@article{tang2024vidcompostion,
  title   = {VidComposition: Can MLLMs Analyze Compositions in Compiled Videos?},
  author  = {Tang, Yunlong and Guo, Junjia and Hua, Hang and Liang, Susan and Feng, Mingqian and Li, Xinyang and Mao, Rui and Huang, Chao and Bi, Jing and Zhang, Zeliang and Fazli, Pooyan and Xu, Chenliang},
  journal = {arXiv preprint arXiv:2411.10979},
  year    = {2024}
}
```