exiawsh committed
Commit ebc37e9 · verified · 1 Parent(s): 4771584

Add files using upload-large-folder tool
.gitattributes CHANGED
@@ -9,7 +9,6 @@
9
  *.joblib filter=lfs diff=lfs merge=lfs -text
10
  *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
  *.lz4 filter=lfs diff=lfs merge=lfs -text
12
- *.mds filter=lfs diff=lfs merge=lfs -text
13
  *.mlmodel filter=lfs diff=lfs merge=lfs -text
14
  *.model filter=lfs diff=lfs merge=lfs -text
15
  *.msgpack filter=lfs diff=lfs merge=lfs -text
@@ -54,6 +53,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
54
  *.jpg filter=lfs diff=lfs merge=lfs -text
55
  *.jpeg filter=lfs diff=lfs merge=lfs -text
56
  *.webp filter=lfs diff=lfs merge=lfs -text
57
- # Video files - compressed
58
- *.mp4 filter=lfs diff=lfs merge=lfs -text
59
- *.webm filter=lfs diff=lfs merge=lfs -text
 
README.md ADDED
@@ -0,0 +1,190 @@
1
+ ---
2
+ license: cc-by-nc-sa-4.0
3
+ extra_gated_prompt: >-
4
+ The LongVideoBench dataset contains links to web videos for data collection
5
+ purposes. LongVideoBench does not own the content linked within this dataset;
6
+ all rights and copyright belong to the respective channel owners. Ensuring
7
+ compliance with platform terms and conditions is the responsibility of these
8
+ source channels. By accessing this dataset, you acknowledge and agree to the
9
+ following terms:
10
+ extra_gated_fields:
11
+ I understand that LongVideoBench does not own the videos in this dataset: checkbox
12
+ I understand that LongVideoBench is not the creator of the videos in this dataset: checkbox
13
+ I understand that LongVideoBench may modify/delete its contents subject to the requirements of the creators or source platforms: checkbox
14
+ I agree to use this dataset for non-commercial use ONLY: checkbox
15
+ I agree with the data license (CC-BY-NC-SA 4.0) for this dataset: checkbox
16
+ task_categories:
17
+ - multiple-choice
18
+ - visual-question-answering
19
+ language:
20
+ - en
21
+ tags:
22
+ - long video understanding
23
+ - long context
24
+ - multimodal
25
+ - neurips 2024
26
+ pretty_name: longvideobench
27
+ ---
28
+
29
+
30
+ ![](https://github.com/longvideobench/longvideobench.github.io/blob/main/logo.png?raw=true)
31
+
32
+
33
+ # Dataset Card for LongVideoBench
34
+
36
+
37
+
38
+
39
+
40
+ Large multimodal models (LMMs) are handling increasingly longer and more complex inputs. However, few public benchmarks are available to assess these advancements. To address this, we introduce LongVideoBench, a question-answering benchmark with video-language interleaved inputs up to an hour long. It comprises 3,763 web-collected videos with subtitles across diverse themes, designed to evaluate LMMs on long-term multimodal understanding.
41
+
42
+ The main challenge LongVideoBench targets is accurately retrieving and reasoning over detailed information from lengthy inputs. To probe this, we present a novel task called referring reasoning, in which each question contains a referring query that references related video contexts, requiring the model to reason over those details.
43
+
44
+ LongVideoBench includes 6,678 human-annotated multiple-choice questions across 17 categories, making it one of the most comprehensive benchmarks for long-form video understanding. Evaluations show significant challenges even for advanced proprietary models (e.g., GPT-4o, Gemini-1.5-Pro, GPT-4-Turbo), with open-source models performing worse. Performance improves only when models process more frames, establishing LongVideoBench as a valuable benchmark for future long-context LMMs.
45
+
46
+
47
+ ## Dataset Details
48
+
49
+ ### Dataset Description
50
+
52
+
53
+ - **Curated by:** LongVideoBench Team
54
+ - **Language(s) (NLP):** English
55
+ - **License:** CC-BY-NC-SA 4.0
56
+
57
+ ### Dataset Sources
58
+
60
+
61
+ - **Repository:** [https://github.com/longvideobench/LongVideoBench](https://github.com/longvideobench/LongVideoBench)
62
+ - **Homepage:** [https://longvideobench.github.io](https://longvideobench.github.io)
63
+ - **Leaderboard:** [https://huggingface.co/spaces/longvideobench/LongVideoBench](https://huggingface.co/spaces/longvideobench/LongVideoBench)
64
+
65
+ ## Leaderboard (as of Oct. 14, 2024)
66
+
67
+ We rank models by Test Total Performance.
68
+
69
+ | Model | Test Total (5341) | Test 8s-15s | Test 15s-60s | Test 180s-600s | Test 900s-3600s | Val Total (1337) |
70
+ | --- | --- | --- | --- | --- | --- | --- |
71
+ | [GPT-4o (0513) (256)](https://platform.openai.com/docs/models/gpt-4o) | 66.7 | 71.6 | 76.8 | 66.7 | 61.6 | 66.7 |
72
+ | [Aria (256)](https://huggingface.co/rhymes-ai/Aria) | 65.0 | 69.4 | 76.6 | 64.6 | 60.1 | 64.2 |
73
+ | [LLaVA-Video-72B-Qwen2 (128)](https://huggingface.co/lmms-lab/LLaVA-Video-72B-Qwen2) | 64.9 | 72.4 | 77.4 | 63.9 | 59.3 | 63.9 |
74
+ | [Gemini-1.5-Pro (0514) (256)](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemini-1.5-pro-001) | 64.4 | 70.2 | 75.3 | 65.0 | 59.1 | 64.0 |
75
+ | [LLaVA-OneVision-QWen2-72B-OV (32)](https://huggingface.co/lmms-lab/llava-onevision-qwen2-72b-ov) | 63.2 | 74.3 | 77.4 | 61.6 | 56.5 | 61.3 |
76
+ | [LLaVA-Video-7B-Qwen2 (128)](https://huggingface.co/lmms-lab/LLaVA-Video-7B-Qwen2) | 62.7 | 69.7 | 76.5 | 62.1 | 56.6 | 61.1 |
77
+ | [Gemini-1.5-Flash (0514) (256)](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemini-1.5-flash-001) | 62.4 | 66.1 | 73.1 | 63.1 | 57.3 | 61.6 |
78
+ | [GPT-4-Turbo (0409) (256)](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) | 60.7 | 66.4 | 71.1 | 61.7 | 54.5 | 59.1 |
79
+ | [InternVL2-40B (16)](https://huggingface.co/OpenGVLab/InternVL2-40B) | 60.6 | 71.4 | 76.6 | 57.5 | 54.4 | 59.3 |
80
+ | [GPT-4o-mini (250)](https://platform.openai.com/docs/models/gpt-4o-mini) | 58.8 | 66.6 | 73.4 | 56.9 | 53.4 | 56.5 |
81
+ | [MiniCPM-V-2.6 (64)](https://huggingface.co/openbmb/MiniCPM-V-2_6) | 57.7 | 62.5 | 69.1 | 54.9 | 49.8 | 54.9 |
82
+ | [Qwen2-VL-7B (256)](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) | 56.8 | 60.1 | 67.6 | 56.7 | 52.5 | 55.6 |
83
+ | [Kangaroo (64)](https://huggingface.co/KangarooGroup/kangaroo) | 54.8 | 65.6 | 65.7 | 52.7 | 49.1 | 54.2 |
84
+ | [PLLaVA-34B (32)](https://github.com/magic-research/PLLaVA) | 53.5 | 60.1 | 66.8 | 50.8 | 49.1 | 53.2 |
85
+ | [InternVL-Chat-V1-5-26B (16)](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5) | 51.7 | 61.3 | 62.7 | 49.5 | 46.6 | 51.2 |
86
+ | [LLaVA-Next-Video-34B (32)](https://llava-vl.github.io/blog/2024-04-30-llava-next-video/) | 50.5 | 57.6 | 61.6 | 48.7 | 45.9 | 50.5 |
87
+ | [Phi-3-Vision-Instruct (16)](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) | 49.9 | 58.3 | 59.6 | 48.4 | 45.1 | 49.6 |
88
+ | [Idefics2 (16)](https://huggingface.co/HuggingFaceM4/idefics2-8b) | 49.4 | 57.4 | 60.4 | 47.3 | 44.7 | 49.7 |
89
+ | [Mantis-Idefics2 (16)](https://huggingface.co/TIGER-Lab/Mantis-8B-Idefics2) | 47.6 | 56.1 | 61.4 | 44.6 | 42.5 | 47.0 |
90
+ | [LLaVA-Next-Mistral-7B (8)](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) | 47.1 | 53.4 | 57.2 | 46.9 | 42.1 | 49.1 |
91
+ | [PLLaVA-13B (32)](https://github.com/magic-research/PLLaVA) | 45.1 | 52.9 | 54.3 | 42.9 | 41.2 | 45.6 |
92
+ | [InstructBLIP-T5-XXL (8)](https://github.com/salesforce/LAVIS/tree/main/projects/instructblip) | 43.8 | 48.1 | 50.1 | 44.5 | 40.0 | 43.3 |
93
+ | [Mantis-BakLLaVA (16)](https://huggingface.co/TIGER-Lab/Mantis-bakllava-7b) | 43.7 | 51.3 | 52.7 | 41.1 | 40.1 | 43.7 |
94
+ | [BLIP-2-T5-XXL (8)](https://github.com/salesforce/LAVIS/tree/main/projects/blip2) | 43.5 | 46.7 | 47.4 | 44.2 | 40.9 | 42.7 |
95
+ | [LLaVA-Next-Video-M7B (32)](https://llava-vl.github.io/blog/2024-04-30-llava-next-video/) | 43.5 | 50.9 | 53.1 | 42.6 | 38.9 | 43.5 |
96
+ | [LLaVA-1.5-13B (8)](https://huggingface.co/llava-hf/llava-1.5-13b-hf) | 43.1 | 49.0 | 51.1 | 41.8 | 39.6 | 43.4 |
97
+ | [ShareGPT4Video (16)](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4Video) | 41.8 | 46.9 | 50.1 | 40.0 | 38.7 | 39.7 |
98
+ | [VideoChat2 (Mistral-7B) (16)](https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat2) | 41.2 | 49.3 | 49.3 | 39.0 | 37.5 | 39.3 |
99
+ | [LLaVA-1.5-7B (8)](https://huggingface.co/llava-hf/llava-1.5-7b-hf) | 40.4 | 45.0 | 47.4 | 40.1 | 37.0 | 40.3 |
100
+ | [mPLUG-Owl2 (8)](https://github.com/X-PLUG/mPLUG-Owl/tree/main/mPLUG-Owl2) | 39.4 | 49.4 | 47.3 | 38.7 | 34.3 | 39.1 |
101
+ | [PLLaVA-7B (32)](https://github.com/magic-research/PLLaVA) | 39.2 | 45.3 | 47.3 | 38.5 | 35.2 | 40.2 |
102
+ | [VideoLLaVA (8)](https://github.com/PKU-YuanGroup/Video-LLaVA/) | 37.6 | 43.1 | 44.6 | 36.4 | 34.4 | 39.1 |
103
+ | [VideoChat2 (Vicuna 7B) (16)](https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat2) | 35.1 | 38.1 | 40.5 | 33.5 | 33.6 | 36.0 |
104
+
105
+
106
+ ## Uses
107
+
109
+
110
+ 1. Download the dataset via Hugging Face Client:
111
+
112
+ ```shell
113
+ huggingface-cli download longvideobench/LongVideoBench --repo-type dataset --local-dir LongVideoBench --local-dir-use-symlinks False
114
+ ```
115
+
116
+ 2. Extract from the `.tar` files:
117
+
118
+ ```shell
119
+ cat videos.tar.part.* > videos.tar
120
+ tar -xvf videos.tar
121
+ tar -xvf subtitles.tar
122
+ ```
123
+
124
+ 3. Use the [LongVideoBench](https://github.com/LongVideoBench/LongVideoBench) dataloader to load the data from the raw MP4 files and subtitles:
125
+
126
+ - (a) Install the dataloader:
127
+
128
+ ```shell
129
+ git clone https://github.com/LongVideoBench/LongVideoBench.git
130
+ cd LongVideoBench
131
+ pip install -e .
132
+ ```
133
+ - (b) Load the dataset in python scripts:
134
+
135
+ ```python
136
+ from longvideobench import LongVideoBenchDataset
137
+
138
+ # validation
139
+ dataset = LongVideoBenchDataset(YOUR_DATA_PATH, "lvb_val.json", max_num_frames=64)
140
+
141
+ # test
142
+ dataset = LongVideoBenchDataset(YOUR_DATA_PATH, "lvb_test_wo_gt.json", max_num_frames=64)
143
+
144
+ print(dataset[0]["inputs"]) # A list consisting of PIL.Image and strings.
145
+ ```
146
+
147
+ The "inputs" are interleaved video frames and text subtitles, followed by questions and option prompts. You can then convert them to the format that your LMMs can accept.
148
+
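The interleaved list can be flattened into a single prompt in a model-agnostic way. Below is a minimal sketch; the `<image>` placeholder token is an assumption, not part of the LongVideoBench API, so substitute whatever your LMM expects:

```python
# Sketch: flatten the interleaved "inputs" list (video frames plus
# subtitle/question strings) into one prompt string. Anything that is not a
# string is treated as a frame (e.g. a PIL.Image) and replaced by an
# <image> placeholder (an assumed convention; adapt to your model's API).
def to_prompt(inputs):
    parts = []
    for item in inputs:
        if isinstance(item, str):
            parts.append(item)
        else:
            parts.append("<image>")
    return "\n".join(parts)
```

For chat-style APIs, you would instead map frames to image attachments and strings to text segments rather than joining everything into one string.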
149
+
150
+ ### Direct Use
151
+
153
+
154
+ This dataset is meant to evaluate LMMs on video understanding and long-context understanding abilities.
155
+
156
+ ### Out-of-Scope Use
157
+
159
+
160
+ We advise against using this dataset for training.
161
+
162
+ ## Dataset Structure
163
+
165
+
166
+ - `lvb_val.json`: Validation set annotations.
167
+
168
+ - `lvb_test_wo_gt.json`: Test set annotations; the correct choices are withheld.
169
+
170
+ - `videos.tar.part.*`: Split tar archive containing the videos; concatenate the parts before extracting.
171
+
172
+ - `subtitles.tar`: Tar archive containing the subtitles.
173
+
174
+
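Since the annotation files are plain JSON, they can be inspected without the dataloader. A hedged sketch, assuming only that each file is a JSON list of records (individual field names are not documented here, so none are guessed):

```python
import json

# Sketch: summarize an annotation file such as lvb_val.json by reporting the
# record count and the top-level keys of the first record.
def summarize_annotations(path):
    with open(path) as f:
        data = json.load(f)
    keys = sorted(data[0]) if data else []
    print(f"{len(data)} records; example keys: {keys}")
    return data
```

This is handy as a sanity check that the download completed before running a full evaluation.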
175
+ ## Citation
176
+
+ ```
181
+ @misc{wu2024longvideobenchbenchmarklongcontextinterleaved,
182
+ title={LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding},
183
+ author={Haoning Wu and Dongxu Li and Bei Chen and Junnan Li},
184
+ year={2024},
185
+ eprint={2407.15754},
186
+ archivePrefix={arXiv},
187
+ primaryClass={cs.CV},
188
+ url={https://arxiv.org/abs/2407.15754},
189
+ }
190
+ ```
lvb_test_wo_gt.json ADDED
The diff for this file is too large to render.
 
lvb_val.json ADDED
The diff for this file is too large to render.
 
subtitles.tar ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:165dc82c8902459247245cf693237ad9fb2ff1b4e0df89ccce975fa67af04149
3
+ size 117381120
test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9015fc7f9e83e4a6479fa533c2c7df8c42a9bab67a7e01be6320262e249a1bec
3
+ size 1612265
validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f370287edbfb9875562b116686d50208d0d40615f5b407f970448868b6bc108
3
+ size 426680
videos.tar.part.aa ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:acfcf40993e68a1a2430d8a8bb0c3e1846980de8a3fdfdc6c6454b70a8800f09
3
+ size 5242880000
videos.tar.part.ab ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:429a049da7e0b94b9cfed2a54fe009cca796d407465315313f72ccdc733408dc
3
+ size 5242880000
videos.tar.part.ac ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:466f8e9713b080c78407e1c4616a43ff7922c512371ccc4af659181e5625c893
3
+ size 5242880000
videos.tar.part.ad ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c99f1bd5f8613814f8b58126cbf432c8bf4a6294e62894041e9da5fa92cb7193
3
+ size 5242880000
videos.tar.part.ae ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7d581d4df74e307162c824e0c8c72a2bac1c049637fc6d9db72c7ea080ccc3da
3
+ size 5242880000
videos.tar.part.af ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:baa1b7c7975b5057e405b4586d25ffac0f575063cbb3ba3feb2ef49aa1a70f22
3
+ size 5242880000
videos.tar.part.ag ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:58fd65d6908a491ca6c950f6e6a7c7231ff43276c93c9d3dc679668668f321f3
3
+ size 5242880000
videos.tar.part.ah ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:76e46094fd3d9aedf71318bb17705dbf67a78bfefe8b6777a0463264cb97a22a
3
+ size 5242880000
videos.tar.part.ai ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dba25e528ebc3fba8987cacbbcd9f11fe92d633608a79088c0e774f78d3a196f
3
+ size 5242880000
videos.tar.part.aj ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:284b4cced983b29f08638c2d02ad6bb12292ab991f3ad3d94d5420ec94641054
3
+ size 5242880000
videos.tar.part.ak ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3a1744b04dedeb74a15dc6e9fc165a57d8bae63e737d0d7a9defe962da525867
3
+ size 5242880000
videos.tar.part.al ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:59a73b64d73dce5f6f05e177d68a798d856b48bf39a6b1b4526206e80d17e2fe
3
+ size 5242880000
videos.tar.part.am ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c32b32bddf8d818c1f3236a427106da9c208d1fe64cd85ae9f9786a53d47a9a3
3
+ size 5242880000
videos.tar.part.an ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bc94e784af7fa8afa12b8963367d582b062f58e311aa63ddceaf0a16b71c11a7
3
+ size 5242880000
videos.tar.part.ao ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:92c0eca266c32cf78a85fe9754c083c5a272339df50eef893624058f34ccce08
3
+ size 5242880000
videos.tar.part.ap ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:80d882d813d066db6b2be2c06ef0f15246b8cd3817b0a6331e2d27e8a6ec22b2
3
+ size 5242880000
videos.tar.part.aq ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0aff4ad87f2dff8cd50c336fcf7956828c650d7ed7b3f6bf870e46281a5e2301
3
+ size 5242880000
videos.tar.part.ar ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e01ff2c3308c65313c0caf8328b2720a3aaff8918e4dd8213a2f1c767d93371e
3
+ size 5242880000
videos.tar.part.as ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4f9121a55ca40336ffe44eeffe6c181662dbe7078c0862d3b4bf61a7eef0d68f
3
+ size 5242880000
videos.tar.part.at ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b21d7095b0ca2116585f51ce8226a467022015a917501478b8879bbd87dfcf80
3
+ size 5242880000
videos.tar.part.au ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:23e2110cf968ab7c61f63eef655bcb5fd9af4fa5dc0364acdf92747b4dfc2909
3
+ size 5242880000
videos.tar.part.av ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b867c381719b699b9782bb06b6b1e4796ff01af7e45d5ea6fd70406b111e6429
3
+ size 5242880000
videos.tar.part.aw ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e5fd34c6dd0e2b0bc85dce3e60c8ca79b46def274e4ebdb9cd4fac519476a81b
3
+ size 5242880000
videos.tar.part.ax ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4d179c8ba172ce5d1bab1935f82362efc4ad29eb5246611b3afc5b0b38bc4679
3
+ size 5242880000
videos.tar.part.ay ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b609a96bb48e36419e578efd9c5ae2b7bb92f2b7392459debf2ba6fa9005981b
3
+ size 5242880000
videos.tar.part.az ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:64a7eadddedc3759d277c178424b18d8e1097ae5569e18790e15d0a464c0576d
3
+ size 5242880000
videos.tar.part.ba ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bdfed5e1473f9af45aa21d958a9c7ee8d7ca7e3234921ac335110a076469e1d5
3
+ size 5242880000
videos.tar.part.bb ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d09c8657bafa08a12f148a4bb08dfc4ee5ba84853ad1ccab7640f3c39950a540
3
+ size 5242880000
videos.tar.part.bc ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d87a42102df78d4e42bcd7872365758281953027b8f8860fcc5f73bc49f74fc1
3
+ size 5242880000
videos.tar.part.bd ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6cb43a2525d9b90e35fdacb75119657b98628f4d59a59f380007937d71277b80
3
+ size 5242880000
videos.tar.part.be ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8e06203c3cbf223a0feae3e44a4cf49db18493d85abcaf96e5b567230dd383f1
3
+ size 4277780480