# AVSpeech Metadata Files
This repository contains the metadata CSV files for the [AVSpeech dataset](https://research.google.com/avspeech/) by Google Research.
## Dataset Description
AVSpeech is a large-scale audio-visual speech dataset containing millions of short video segments drawn from roughly 290,000 YouTube videos, designed for research in audio-visual speech processing such as speech separation, speech recognition, and lip reading.
## Files
- `avspeech_train.csv` (128 MB) - Training set with 2,621,845 video segments from 270k videos
- `avspeech_test.csv` (9 MB) - Test set with video segments from a separate set of 22k videos
## CSV Format
Each row contains:
```
YouTube ID, start_time, end_time, x_coordinate, y_coordinate
```
Where:
- **YouTube ID**: The YouTube video identifier
- **start_time**: Start time of the segment in seconds
- **end_time**: End time of the segment in seconds
- **x_coordinate**: X coordinate of the speaker's face center (normalized 0.0-1.0, 0.0 = left)
- **y_coordinate**: Y coordinate of the speaker's face center (normalized 0.0-1.0, 0.0 = top)
The train and test sets have disjoint speakers.
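The CSVs ship without a header row, so column names must be supplied when parsing. Below is a minimal sketch using pandas; the column names, the sample row, and the `face_center_pixels` helper (which maps the normalized face center to pixel coordinates for a given frame size) are illustrative choices, not part of the dataset:

```python
import io

import pandas as pd

COLUMNS = ["youtube_id", "start_time", "end_time", "x", "y"]

def load_avspeech_csv(path_or_buf):
    """Parse an AVSpeech metadata CSV (no header row) into a DataFrame."""
    return pd.read_csv(path_or_buf, names=COLUMNS)

def face_center_pixels(x, y, width, height):
    """Convert a normalized face center (0.0-1.0) to pixel coordinates."""
    return round(x * width), round(y * height)

# Example with one in-memory row in the documented format
# (the YouTube ID here is a made-up placeholder)
sample = io.StringIO("AbC123xyz_0,10.50,16.20,0.45,0.38\n")
df = load_avspeech_csv(sample)
px = face_center_pixels(df.loc[0, "x"], df.loc[0, "y"], 1280, 720)
print(px)  # (576, 274) for a 1280x720 frame
```

The same loader works on the full files once downloaded; only the path changes.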
## Usage
### With Hugging Face Hub
```python
from huggingface_hub import hf_hub_download

# Download the train CSV
train_csv = hf_hub_download(
    repo_id="bbrothers/avspeech-metadata",
    filename="avspeech_train.csv",
    repo_type="dataset",
)

# Download the test CSV
test_csv = hf_hub_download(
    repo_id="bbrothers/avspeech-metadata",
    filename="avspeech_test.csv",
    repo_type="dataset",
)
```
### With our dataset loader
```python
from ml.data.av_speech.dataset import AVSpeechDataset

# Initialize the dataset (auto-downloads the CSVs if needed)
dataset = AVSpeechDataset()

# Download videos
dataset.download(
    splits=["train", "test"],
    max_videos=100,  # or None for all videos
    num_workers=4,
)
```
## Citation
If you use this dataset, please cite the original AVSpeech paper:
```bibtex
@article{ephrat2018looking,
  title={Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation},
  author={Ephrat, Ariel and Mosseri, Inbar and Lang, Oran and Dekel, Tali and Wilson, Kevin and Hassidim, Avinatan and Freeman, William T and Rubinstein, Michael},
  journal={ACM Transactions on Graphics},
  volume={37},
  number={4},
  year={2018}
}
```
## Links
- [AVSpeech Official Page](https://research.google.com/avspeech/)
- [Original Paper](https://arxiv.org/abs/1804.03619)
- [Our GitHub Repository](https://github.com/Pierre-LouisBJT/interconnect)
## Notes
- This repository only contains the metadata CSV files, not the actual video content
- Videos must be downloaded from YouTube using the provided YouTube IDs
- Some videos may no longer be available (deleted, private, or geo-blocked)
- Estimated total dataset size: ~4,700 hours of video
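Fetching a segment is typically a two-step process: download the source video by its YouTube ID, then trim it to the `[start_time, end_time]` window. This repository does not prescribe a tool, so the sketch below only *builds* standard `yt-dlp` and `ffmpeg` command lines without executing anything; the exact format selection and the `build_download_commands` helper are assumptions, not part of the dataset:

```python
def build_download_commands(youtube_id, start_time, end_time, out_path):
    """Build shell commands to fetch a video with yt-dlp and trim the
    segment with ffmpeg. Nothing is executed here; run the returned
    commands with subprocess.run(...) if desired."""
    url = f"https://www.youtube.com/watch?v={youtube_id}"
    full_video = f"{youtube_id}.mp4"
    fetch = ["yt-dlp", "-f", "mp4", "-o", full_video, url]
    trim = [
        "ffmpeg",
        "-ss", f"{start_time:.2f}",            # seek to the segment start
        "-i", full_video,
        "-t", f"{end_time - start_time:.2f}",  # segment duration
        "-c", "copy",                          # stream copy: fast, no re-encode
        out_path,
    ]
    return fetch, trim

# Placeholder ID; real IDs come from the CSV rows
fetch_cmd, trim_cmd = build_download_commands("AbC123xyz_0", 10.5, 16.2, "clip.mp4")
```

Note that `-c copy` cuts on keyframes, so trims are approximate; re-encoding gives frame-accurate cuts at the cost of speed. Expect some downloads to fail, since videos may have been removed since the dataset was published.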