---
task_categories:
- automatic-speech-recognition
language:
- en
- zh
tags:
- speaker-diarization
- meeting-transcription
- bilingual
license: apache-2.0
---

# Dataset Card for Multi-Talker-SD

### Dataset Description

**Multi-Talker-SD** is a large-scale bilingual (English–Mandarin) multi-speaker meeting dataset designed to support research on **speaker diarization** and **meeting transcription**.

- **Size:** 1,000 simulated meetings
- **Participants per meeting:** 10–30 speakers
- **Duration:** ~20 minutes on average, up to one hour per meeting
- **Languages:** English and Mandarin (code-switching possible)
- **Audio characteristics:** realistic speaker overlap, turn-taking patterns, reverberation, and injected noise
- **Metadata:** speaker gender, language, session type, utterance timing

The audio is synthesized from utterances in **AIShell-1** (Mandarin) and **LibriSpeech** (English), with noise and reverberation added to approximate real meeting conditions.

- **Curated by:** AISG Speech Lab
- **License:** Apache-2.0

### Dataset Sources

- **Repository:** [GitHub - Multi-Talker-SD](https://github.com/wyhzhen6/MULTI-TALKER-SD)
- **Dataset on HF Hub:** [Multi-Talker-SD](https://huggingface.co/datasets/yihao005/Multi-Talker-SD) (a download sketch follows below)
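
The simplest way to fetch the full repository is via `huggingface_hub`; a minimal sketch (the local layout of audio, RTTM, and metadata files is described under Dataset Structure below):

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Download the entire dataset repository (audio, RTTM, and metadata files)
# into the local HF cache and return the path to the snapshot.
local_dir = snapshot_download(
    repo_id="yihao005/Multi-Talker-SD",
    repo_type="dataset",
)
print(local_dir)
```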
|
|
|
|
|
### Direct Use

- Research on **speaker diarization** under multilingual and overlapped-speech conditions
- **Meeting transcription** in bilingual settings
- Controlled experiments on the effects of speaker metadata (gender, language, etc.)
- Training and evaluation of **overlap-aware diarization models** (see the scoring sketch after this list)
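
For evaluation, a system's RTTM output can be scored against the reference annotations. A minimal sketch using the third-party `pyannote` packages (an assumption here, not a dependency of the dataset); the file names are hypothetical:

```python
# pip install pyannote.metrics pyannote.database
from pyannote.database.util import load_rttm
from pyannote.metrics.diarization import DiarizationErrorRate

# Hypothetical file names: reference annotations shipped with the dataset
# versus a system's hypothesis output.
reference = load_rttm("meeting_0001.ref.rttm")
hypothesis = load_rttm("meeting_0001.hyp.rttm")

metric = DiarizationErrorRate()
for uri, ref in reference.items():
    metric(ref, hypothesis[uri])  # accumulate per-meeting scores

print(f"DER: {abs(metric):.3f}")  # aggregate diarization error rate
```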
|
|
|
|
|
## Dataset Structure

- **Audio files (`.wav`):** multi-speaker simulated meetings
- **RTTM files:** diarization annotations with speaker labels and timestamps (a parsing sketch follows this list)
- **Metadata files:** speaker profiles including gender, language, and session type
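
RTTM is a plain-text format with one speech segment per line. A minimal parser for the standard ten-field `SPEAKER` lines (field layout per the NIST RTTM convention; the dataset's exact files may carry extra fields):

```python
from collections import namedtuple

Segment = namedtuple("Segment", "file_id onset duration speaker")

def parse_rttm(path):
    """Parse SPEAKER lines of the form:
    SPEAKER <file-id> <chan> <onset> <dur> <NA> <NA> <speaker-id> <NA> <NA>
    """
    segments = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields or fields[0] != "SPEAKER":
                continue
            segments.append(Segment(
                file_id=fields[1],
                onset=float(fields[3]),
                duration=float(fields[4]),
                speaker=fields[7],
            ))
    return segments
```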
|
|
|
|
|
Each example contains (see the assembly sketch below):

- Meeting ID
- List of speakers (with attributes)
- Audio waveform
- RTTM segmentation
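
A sketch of assembling one example from its files, reusing `parse_rttm` from the sketch above; the one-`.wav`-plus-one-`.rttm`-per-meeting layout and the file names are assumptions:

```python
import soundfile as sf  # pip install soundfile

meeting_id = "meeting_0001"  # hypothetical meeting ID
waveform, sample_rate = sf.read(f"{meeting_id}.wav")
segments = parse_rttm(f"{meeting_id}.rttm")

# Slice out each annotated speaker turn from the meeting waveform.
for seg in segments:
    start = int(seg.onset * sample_rate)
    stop = int((seg.onset + seg.duration) * sample_rate)
    turn = waveform[start:stop]
    print(seg.speaker, seg.onset, seg.duration, turn.shape)
```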
|
|
|
|
|
## Dataset Creation

### Source Data

- **English speech:** LibriSpeech
- **Mandarin speech:** AIShell-1
- **Noise sources:** point-source and diffuse-field noise corpora
- **Processing:** audio mixing, reverberation simulation, and overlap control (a mixing sketch follows this list)
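
A minimal sketch of this kind of simulation using `numpy` and `scipy`; the function, its parameters, and the SNR-based noise scaling are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np
from scipy.signal import fftconvolve

def mix_meeting(utterances, rirs, offsets, total_len, noise=None, snr_db=20.0):
    """Overlap-additively mix reverberant utterances into one meeting track.

    utterances: 1-D float arrays of single-speaker speech
    rirs:       room impulse responses, one per utterance
    offsets:    start sample of each utterance (controls overlap)
    """
    mix = np.zeros(total_len)
    for utt, rir, start in zip(utterances, rirs, offsets):
        reverberant = fftconvolve(utt, rir)[: total_len - start]
        mix[start : start + len(reverberant)] += reverberant
    if noise is not None:
        noise = noise[:total_len]
        # Scale noise so the mixture-to-noise ratio equals snr_db.
        gain = np.sqrt(np.mean(mix ** 2) / (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
        mix[: len(noise)] += gain * noise
    return mix
```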
|
|
|
|
|
### Personal and Sensitive Information

No personally identifiable or sensitive data is included. All speech is sourced from **public corpora**.