# AVSpeech Metadata Files
This repository contains the metadata CSV files for the AVSpeech dataset by Google Research.
## Dataset Description
AVSpeech is a large-scale audio-visual speech dataset containing millions of short video segments drawn from roughly 290,000 YouTube videos, designed for audio-visual speech research such as speech separation, speech recognition, and lip reading.
## Files

- `avspeech_train.csv` (128 MB) - Training set with 2,621,845 video segments from ~270k videos
- `avspeech_test.csv` (9 MB) - Test set with video segments from a separate set of ~22k videos
## CSV Format

Each row contains:

```
YouTube ID, start_time, end_time, x_coordinate, y_coordinate
```
Where:
- YouTube ID: The YouTube video identifier
- start_time: Start time of the segment in seconds
- end_time: End time of the segment in seconds
- x_coordinate: X coordinate of the speaker's face center (normalized 0.0-1.0, 0.0 = left)
- y_coordinate: Y coordinate of the speaker's face center (normalized 0.0-1.0, 0.0 = top)
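The CSVs have no header row, so the column names must be supplied when loading. A minimal parsing sketch using pandas (the column names and helper below are illustrative, not defined by the dataset itself):

```python
import io

import pandas as pd

# The CSVs ship without a header row, so supply column names explicitly.
# (These names are illustrative; the files only define the column order.)
COLUMNS = ["youtube_id", "start_time", "end_time", "face_x", "face_y"]

def load_avspeech_csv(path_or_buffer):
    """Read an AVSpeech metadata CSV into a DataFrame with named columns."""
    return pd.read_csv(path_or_buffer, header=None, names=COLUMNS)

def face_center_pixels(row, frame_width, frame_height):
    """Map normalized (0.0-1.0) face-center coords to pixel coords (origin top-left)."""
    return (row["face_x"] * frame_width, row["face_y"] * frame_height)

# Example with one inline row; a real file would be passed by path instead.
sample = io.StringIO("abc123XYZ_-,10.0,15.5,0.5,0.25\n")
df = load_avspeech_csv(sample)
```

Because the coordinates are normalized, the same row works for any resolution the video is downloaded at: multiply by the frame width and height to get the face center in pixels.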
The train and test sets have disjoint speakers.
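Since the split is made at the video level, the separation can be sanity-checked by intersecting the YouTube IDs of the two files. A small sketch (this helper is illustrative and not part of the repository's tooling):

```python
import csv

def overlapping_video_ids(train_path, test_path):
    """Return YouTube IDs that appear in both metadata CSVs (expected: empty set)."""
    def ids(path):
        with open(path, newline="") as f:
            # Column 0 of each row is the YouTube video ID.
            return {row[0] for row in csv.reader(f) if row}
    return ids(train_path) & ids(test_path)
```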
## Usage

### With Hugging Face Hub
```python
from huggingface_hub import hf_hub_download

# Download train CSV
train_csv = hf_hub_download(
    repo_id="bbrothers/avspeech-metadata",
    filename="avspeech_train.csv",
    repo_type="dataset",
)

# Download test CSV
test_csv = hf_hub_download(
    repo_id="bbrothers/avspeech-metadata",
    filename="avspeech_test.csv",
    repo_type="dataset",
)
```
### With our dataset loader

```python
from ml.data.av_speech.dataset import AVSpeechDataset

# Initialize dataset (will auto-download CSVs if needed)
dataset = AVSpeechDataset()

# Download videos
dataset.download(
    splits=["train", "test"],
    max_videos=100,  # Or None for all videos
    num_workers=4,
)
```
## Citation
If you use this dataset, please cite the original AVSpeech paper:
```bibtex
@article{ephrat2018looking,
  title={Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation},
  author={Ephrat, Ariel and Mosseri, Inbar and Lang, Oran and Dekel, Tali and Wilson, Kevin and Hassidim, Avinatan and Freeman, William T and Rubinstein, Michael},
  journal={ACM Transactions on Graphics (TOG)},
  volume={37},
  number={4},
  year={2018}
}
```
## Notes
- This repository only contains the metadata CSV files, not the actual video content
- Videos must be downloaded from YouTube using the provided YouTube IDs
- Some videos may no longer be available (deleted, private, or geo-blocked)
- Estimated total dataset size: ~4500 hours of video
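One common way to fetch a single segment is `yt-dlp` with its section-download support. The helper below is an illustrative sketch, not the repository's official tooling; the exact flags depend on your `yt-dlp` version:

```python
def segment_download_command(youtube_id, start_time, end_time, out_path):
    """Build a yt-dlp command that downloads only [start_time, end_time] of a video.

    Illustrative sketch: yt-dlp's --download-sections flag clips the requested
    time range (via ffmpeg) instead of fetching the whole video.
    """
    url = f"https://www.youtube.com/watch?v={youtube_id}"
    section = f"*{start_time}-{end_time}"
    return [
        "yt-dlp",
        "--download-sections", section,
        "-o", out_path,
        url,
    ]

# Build the command for one (hypothetical) CSV row.
cmd = segment_download_command("abc123XYZ_-", 10.0, 15.5, "clip.mp4")
```

The resulting command can be executed with `subprocess.run(cmd, check=True)`; wrap the call in error handling, since (per the notes above) some videos are no longer available.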