AVSpeech Metadata Files

This repository contains the metadata CSV files for the AVSpeech dataset by Google Research.

Dataset Description

AVSpeech is a large-scale audio-visual speech dataset containing over 2.6 million video segments drawn from roughly 290,000 YouTube videos. It was originally collected for audio-visual speech separation research ("Looking to Listen", see the citation below) and is also widely used for audio-visual speech recognition and lip reading.

Files

  • avspeech_train.csv (128 MB) - Training set with 2,621,845 video segments from 270k videos
  • avspeech_test.csv (9 MB) - Test set with video segments from a separate set of 22k videos

CSV Format

The CSV files have no header row; each row contains five comma-separated fields:

YouTube ID, start_time, end_time, x_coordinate, y_coordinate

Where:

  • YouTube ID: The YouTube video identifier
  • start_time: Start time of the segment in seconds
  • end_time: End time of the segment in seconds
  • x_coordinate: X coordinate of the speaker's face center (normalized 0.0-1.0, 0.0 = left)
  • y_coordinate: Y coordinate of the speaker's face center (normalized 0.0-1.0, 0.0 = top)

The train and test sets have disjoint speakers.
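
Because the files are headerless, pass column names explicitly when loading them. A minimal sketch using pandas (the column labels and the 1280x720 frame size below are illustrative choices made here, not part of the dataset):

import pandas as pd

# Column labels chosen to match the field list above; the CSVs themselves have no header row.
columns = ["youtube_id", "start_time", "end_time", "x_coordinate", "y_coordinate"]
df = pd.read_csv("avspeech_test.csv", header=None, names=columns)

print(df.head())
# e.g. one test row: CJoOwXcjhds, 233.266, 239.367, 0.780469, 0.670833

# The face-center coordinates are normalized, so scale them by the frame size
# of the decoded video to get pixel positions.
frame_w, frame_h = 1280, 720  # illustrative resolution; use the actual video resolution
df["face_x_px"] = (df["x_coordinate"] * frame_w).round().astype(int)
df["face_y_px"] = (df["y_coordinate"] * frame_h).round().astype(int)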

Usage

With Hugging Face Hub

from huggingface_hub import hf_hub_download

# Download train CSV
train_csv = hf_hub_download(
    repo_id="bbrothers/avspeech-metadata",
    filename="avspeech_train.csv",
    repo_type="dataset"
)

# Download test CSV
test_csv = hf_hub_download(
    repo_id="bbrothers/avspeech-metadata",
    filename="avspeech_test.csv",
    repo_type="dataset"
)

With our dataset loader

from ml.data.av_speech.dataset import AVSpeechDataset

# Initialize dataset (will auto-download CSVs if needed)
dataset = AVSpeechDataset()

# Download videos
dataset.download(
    splits=['train', 'test'],
    max_videos=100,  # Or None for all videos
    num_workers=4
)
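
If you are not using the loader above, individual segments can also be cut directly from YouTube. A rough sketch using the yt-dlp command line (download_segment is a hypothetical helper; it assumes yt-dlp and ffmpeg are installed and uses the --download-sections flag of a recent yt-dlp release, whose exact syntax may vary by version):

import subprocess

def download_segment(youtube_id: str, start: float, end: float, out_dir: str = "clips"):
    # Fetch only the [start, end] time range (in seconds) of one AVSpeech segment.
    url = f"https://www.youtube.com/watch?v={youtube_id}"
    cmd = [
        "yt-dlp",
        "--download-sections", f"*{start}-{end}",  # '*' prefix marks a time range
        "-o", f"{out_dir}/{youtube_id}_{start:.2f}_{end:.2f}.%(ext)s",
        url,
    ]
    subprocess.run(cmd, check=True)

# Example, using a row from the test CSV:
# download_segment("CJoOwXcjhds", 233.266, 239.367)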

Citation

If you use this dataset, please cite the original AVSpeech paper:

@article{ephrat2018looking,
  title={Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation},
  author={Ephrat, Ariel and Mosseri, Inbar and Lang, Oran and Dekel, Tali and Wilson, Kevin and Hassidim, Avinatan and Freeman, William T and Rubinstein, Michael},
  journal={ACM Transactions on Graphics},
  volume={37},
  number={4},
  year={2018}
}

Notes

  • This repository only contains the metadata CSV files, not the actual video content
  • Videos must be downloaded from YouTube using the provided YouTube IDs
  • Some videos may no longer be available (deleted, private, or geo-blocked)
  • Estimated total dataset size: ~4500 hours of video