---
license: apache-2.0
task_categories:
- question-answering
- multiple-choice
- video-text-to-text
---
# VidComposition Benchmark

[🖥 Project Page](https://yunlong10.github.io/VidComposition) | [🚀 Evaluation Space](https://huggingface.co/spaces/JunJiaGuo/VidComposition)

The advancement of Multimodal Large Language Models (MLLMs) has enabled significant progress in multimodal understanding, expanding their capacity to analyze video content. However, existing evaluation benchmarks for MLLMs primarily focus on abstract video comprehension and lack a detailed assessment of video composition understanding: the nuanced interpretation of how visual elements combine and interact within highly compiled video contexts. We introduce **VidComposition**, a new benchmark specifically designed to evaluate the video composition understanding capabilities of MLLMs using carefully curated compiled videos and cinematic-level annotations. VidComposition includes 982 videos with 1,706 multiple-choice questions covering compositional aspects such as camera movement, camera angle, shot size, narrative structure, and character actions and emotions. Our comprehensive evaluation of 33 open-source and proprietary MLLMs reveals a significant performance gap between human and model capabilities, highlighting the limitations of current MLLMs in understanding complex, compiled video compositions and offering insights into areas for further improvement.

---

## 📁 Dataset Format

Each item in the dataset is a JSON object structured as follows [[multi_choice.json](https://huggingface.co/datasets/JunJiaGuo/VidComposition_Benchmark/blob/main/multi_choice.json)]:

```json
{
  "video": "0SIK_5qpD70",
  "segment": "0SIK_5qpD70_183.3_225.5.mp4",
  "class": "background_perception",
  "question": "What is the main background in the video?",
  "options": {
    "A": "restaurant",
    "B": "hallway",
    "C": "grassland",
    "D": "wood"
  },
  "id": "1cad95c1-d13a-4ef0-b1c1-f7e753b5122f"
}
```
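To illustrate how an item might be turned into a model input, here is a minimal Python sketch. The prompt wording and `build_prompt` helper are illustrative assumptions, not an official template; in practice you would load the full item list from `multi_choice.json`.

```python
import json

def build_prompt(item: dict) -> str:
    """Format one VidComposition item as a multiple-choice prompt.

    The prompt wording is an illustrative choice, not the official template.
    """
    options = "\n".join(f"{k}. {v}" for k, v in sorted(item["options"].items()))
    return f"{item['question']}\n{options}\nAnswer with the option letter only."

# The sample item from above; real usage would load the full list with
# items = json.load(open("multi_choice.json"))
item = {
    "video": "0SIK_5qpD70",
    "segment": "0SIK_5qpD70_183.3_225.5.mp4",
    "class": "background_perception",
    "question": "What is the main background in the video?",
    "options": {"A": "restaurant", "B": "hallway", "C": "grassland", "D": "wood"},
    "id": "1cad95c1-d13a-4ef0-b1c1-f7e753b5122f",
}
print(build_prompt(item))
```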

## 🧪 Evaluation

To evaluate your model on VidComposition, format your prediction file as follows:
```json
[
  {
    "id": "1cad95c1-d13a-4ef0-b1c1-f7e753b5122f",
    "model_answer": "A"
  },
  ...
]
```
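If you hold ground-truth labels locally, scoring such a prediction file reduces to exact-match accuracy over option letters keyed by question `id`. The sketch below assumes a hypothetical `gold` mapping from `id` to the correct letter; the public `multi_choice.json` does not include answers, so official scoring goes through the evaluation space linked above.

```python
def accuracy(predictions: list, gold: dict) -> float:
    """Exact-match accuracy over option letters, keyed by question id.

    `gold` maps id -> correct letter (an assumed local mapping; answers are
    not part of the public multi_choice.json).
    """
    scored = [p for p in predictions if p["id"] in gold]
    correct = sum(p["model_answer"].strip().upper() == gold[p["id"]] for p in scored)
    return correct / len(scored) if scored else 0.0

# Illustrative predictions and gold labels (second id is made up for the demo)
preds = [
    {"id": "1cad95c1-d13a-4ef0-b1c1-f7e753b5122f", "model_answer": "A"},
    {"id": "deadbeef-0000-0000-0000-000000000000", "model_answer": "C"},
]
gold = {
    "1cad95c1-d13a-4ef0-b1c1-f7e753b5122f": "A",
    "deadbeef-0000-0000-0000-000000000000": "B",
}
print(accuracy(preds, gold))  # 0.5
```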

## 📚 Citation

If you find this dataset useful, please cite the following paper:

```bibtex
@article{tang2024vidcompostion,
  title = {VidComposition: Can MLLMs Analyze Compositions in Compiled Videos?},
  author = {Tang, Yunlong and Guo, Junjia and Hua, Hang and Liang, Susan and Feng, Mingqian and Li, Xinyang and Mao, Rui and Huang, Chao and Bi, Jing and Zhang, Zeliang and Fazli, Pooyan and Xu, Chenliang},
  journal = {arXiv preprint arXiv:2411.10979},
  year = {2024}
}
```