# KazEmoTTS
⌨️ 😐 😠 🙂 😞 😱 😮 🗣
This repository provides the dataset and text-to-speech (TTS) model for the paper [KazEmoTTS: A Dataset for Kazakh Emotional Text-to-Speech Synthesis](https://arxiv.org/abs/2404.01033).
## Summary
This study presents the KazEmoTTS dataset, designed for Kazakh emotional text-to-speech (TTS) applications. KazEmoTTS is a collection of 54,760 audio-text pairs with a total duration of 74.85 hours: 34.23 hours delivered by a female narrator and 40.62 hours by two male narrators. The emotions covered are “neutral”, “angry”, “happy”, “sad”, “scared”, and “surprised”. We also developed a TTS model trained on KazEmoTTS. Objective and subjective evaluations were employed to assess the quality of the synthesized speech, yielding mel-cepstral distortion (MCD) scores ranging from 6.02 to 7.67 and mean opinion scores (MOS) ranging from 3.51 to 3.57. To facilitate reproducibility and inspire further research, our code, pre-trained model, and dataset are available in this repository.
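The MCD figures reported above measure the spectral distance between synthesized and reference speech. As a rough sketch only (the exact MCD variant, frame alignment, and number of mel-cepstral coefficients used in the paper may differ), the standard per-frame formula is `(10 / ln 10) * sqrt(2 * Σ (c_i − c'_i)²)`, usually excluding the 0th (energy) coefficient:

```python
import math

def mel_cepstral_distortion(c_ref, c_syn):
    """MCD in dB between two time-aligned mel-cepstral frames.

    Excludes the 0th (energy) coefficient, as is common practice.
    """
    sq = sum((a - b) ** 2 for a, b in zip(c_ref[1:], c_syn[1:]))
    return (10.0 / math.log(10)) * math.sqrt(2.0 * sq)

def mean_mcd(frames_ref, frames_syn):
    """Average frame-level MCD over aligned frame sequences (e.g. after DTW)."""
    vals = [mel_cepstral_distortion(r, s) for r, s in zip(frames_ref, frames_syn)]
    return sum(vals) / len(vals)
```

In practice, frames are first extracted with a vocoder analysis tool (e.g. WORLD or SPTK) and aligned with dynamic time warping before averaging.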
## Dataset Statistics 📊
| Emotion | # recordings | F1 Total (h) | F1 Mean (s) | F1 Min (s) | F1 Max (s) | M1 Total (h) | M1 Mean (s) | M1 Min (s) | M1 Max (s) | M2 Total (h) | M2 Mean (s) | M2 Min (s) | M2 Max (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| neutral | 9,385 | 5.85 | 5.03 | 1.03 | 15.51 | 4.54 | 4.77 | 0.84 | 16.18 | 2.30 | 4.69 | 1.02 | 15.81 |
| angry | 9,059 | 5.44 | 4.78 | 1.11 | 14.09 | 4.27 | 4.75 | 0.93 | 17.03 | 2.31 | 4.81 | 1.02 | 15.67 |
| happy | 9,059 | 5.77 | 5.09 | 1.07 | 15.33 | 4.43 | 4.85 | 0.98 | 15.56 | 2.23 | 4.74 | 1.09 | 15.25 |
| sad | 8,980 | 5.60 | 5.04 | 1.11 | 15.21 | 4.62 | 5.13 | 0.72 | 18.00 | 2.65 | 5.52 | 1.16 | 18.16 |
| scared | 9,098 | 5.66 | 4.96 | 1.00 | 15.67 | 4.13 | 4.51 | 0.65 | 16.11 | 2.34 | 4.96 | 1.07 | 14.49 |
| surprised | 9,179 | 5.91 | 5.09 | 1.09 | 14.56 | 4.52 | 4.92 | 0.81 | 17.67 | 2.28 | 4.87 | 1.04 | 15.81 |

| Narrator | # recordings | Duration (h) |
|---|---|---|
| F1 | 24,656 | 34.23 |
| M1 | 19,802 | 26.51 |
| M2 | 10,302 | 14.11 |
| Total | 54,760 | 74.85 |
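Statistics like those in the tables above can be recomputed from per-clip durations. A minimal sketch, assuming a hypothetical metadata CSV with `narrator`, `emotion`, and `duration_sec` columns (the actual dataset layout may differ):

```python
import csv
from collections import defaultdict

def duration_stats(metadata_path):
    """Aggregate per-(narrator, emotion) counts and duration statistics.

    Assumes a CSV with columns: file_id, narrator, emotion, duration_sec.
    Returns total hours and per-clip mean/min/max seconds for each bucket.
    """
    buckets = defaultdict(list)
    with open(metadata_path, newline="") as f:
        for row in csv.DictReader(f):
            buckets[(row["narrator"], row["emotion"])].append(float(row["duration_sec"]))
    return {
        key: {
            "recordings": len(durs),
            "total_h": sum(durs) / 3600.0,
            "mean_s": sum(durs) / len(durs),
            "min_s": min(durs),
            "max_s": max(durs),
        }
        for key, durs in buckets.items()
    }
```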
## Synthesized samples 🔈
You can listen to some synthesized samples here.
## Citation 🎓
If you use our dataset and/or model in your work, please cite our paper. Proper referencing upholds academic integrity and ensures that the authors' efforts are duly acknowledged.
```bibtex
@misc{abilbekov2024kazemotts,
    title={KazEmoTTS: A Dataset for Kazakh Emotional Text-to-Speech Synthesis},
    author={Adal Abilbekov and Saida Mussakhojayeva and Rustem Yeshpanov and Huseyin Atakan Varol},
    year={2024},
    eprint={2404.01033},
    archivePrefix={arXiv},
    primaryClass={eess.AS}
}
```