---
dataset_info:
  features:
  - name: english_text
    dtype: string
  - name: english_audio
    dtype: audio
  - name: naija_text
    dtype: string
  - name: naija_audio
    dtype: audio
  - name: speaker
    dtype: string
  splits:
  - name: igbo
    num_bytes: 77329160
    num_examples: 500
  - name: yoruba
    num_bytes: 107895468
    num_examples: 500
  - name: hausa
    num_bytes: 238658365
    num_examples: 500
  download_size: 423205037
  dataset_size: 423882993
configs:
- config_name: default
  data_files:
  - split: igbo
    path: data/igbo-*
  - split: yoruba
    path: data/yoruba-*
  - split: hausa
    path: data/hausa-*
license: apache-2.0
task_categories:
- automatic-speech-recognition
- text-to-speech
- translation
- text-classification
language:
- en
- ig
- yo
- ha
multilinguality: multilingual
language_creators:
- AfroVoices
tags:
- audio
- text
- speech-translation
- text-translation
- machine-translation
- automatic-speech-recognition
- low-resource
- derived-from-fleurs
- afrovoices
- igbo
- yoruba
- hausa
pretty_name: Hypa_Fleurs
size_categories:
- 1K<n<10K
---
# Hypa_Fleurs

Hypa_Fleurs is an open-source, multilingual, multi-modal dataset that leverages the English split of the Google FLEURS dataset to create parallel speech and text data for low-resource African languages. Its long-term vision is to advance speech and language technology across a wide range of these languages. In this initial release, professional AfroVoices experts translated the original English texts into three under-resourced African languages: Igbo (ig), Yoruba (yo), and Hausa (ha).
In addition to the text-to-text translations, the dataset includes parallel speech recordings where the experts read the corresponding English and local language texts. This dataset provides:
- Text-to-Text Translations: English sentences paired with their translations in Igbo, Yoruba, and Hausa.
- Speech-to-Speech Recordings: Audio recordings of native speakers reading both the English texts and the corresponding translated texts.
This dual modality (text and audio) supports various downstream tasks such as machine translation, automatic speech recognition (ASR), text-to-speech (TTS), language identification (LI), and cross-lingual transfer learning.
## Dataset Components

### Text-to-Text Translations
- Source: Derived from the English split of the Google Fleurs dataset.
- Languages: English paired with translations in:
- Igbo
- Yoruba
- Hausa
- Format: Typically stored in CSV or JSON files where each record contains:
- The English sentence.
- The corresponding translations for each target language.
- Splits: The dataset is divided into one split per target language (igbo, yoruba, hausa), mirroring the FLEURS partitioning scheme while focusing on African languages.
### Speech-to-Speech Recordings
- Source: Audio recordings by AfroVoices experts.
- Languages: Parallel recordings for:
- English
- Igbo
- Yoruba
- Hausa
- Format: Audio files (e.g., WAV) with accompanying metadata files (e.g., CSV/JSON) that include:
- Unique identifier linking to text entries.
- Language code.
- Duration, sample rate, and other audio properties.
- Parallelism: Each audio file is aligned with the corresponding text in both the source (English) and target languages.
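The metadata layout above is illustrative; assuming each record carries the unique identifier described, text-audio alignment can be checked with a small helper. The field names (`id`, `language`, `duration`) are hypothetical stand-ins for whatever the metadata files actually contain:

```python
# Sketch: verify that every audio metadata record links to a text entry.
# Record fields ("id", "language", "duration") are hypothetical and only
# mirror the metadata description above.

def check_alignment(audio_records, text_records):
    """Return IDs of audio records with no matching text entry."""
    text_ids = {rec["id"] for rec in text_records}
    return [rec["id"] for rec in audio_records if rec["id"] not in text_ids]

audio_meta = [
    {"id": "0001", "language": "ig", "duration": 4.2},
    {"id": "0002", "language": "ig", "duration": 3.7},
]
text_meta = [{"id": "0001", "language": "ig"}]

print(check_alignment(audio_meta, text_meta))  # ['0002'] has no text entry
```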
## Data Structure

### Data Instances
A typical data instance contains the source English text, the target-language text (Igbo, Yoruba, or Hausa), the corresponding English and target-language audio recordings, and the speaker's name. Language names (or codes) are encoded in the split names rather than stored as a field.
```json
{
  // Example row from the "igbo" split
  "english_text": "A tornado is a spinning column of very low-pressure air, which sucks the surrounding air inward and upward.",
  "naija_text": "Oke ifufe bụ kọlụm na-atụgharị ikuku dị obere, nke na-amịpụta ikuku gbara ya gburugburu n'ime na elu.",
  "english_audio": {
    "path": "[hypaai/Hypa_Fleurs/english/data/0001_English.wav]",  // Relative path within the dataset
    "array": [...],  // Decoded audio array (when loaded with datasets)
    "sampling_rate": 16000
  },
  "naija_audio": {
    "path": "[hypaai/Hypa_Fleurs/igbo/data/0001_Igbo.wav]",  // Relative path within the dataset
    "array": [...],  // Decoded audio array (when loaded with datasets)
    "sampling_rate": 16000
  },
  "speaker": "Gift"
}
```
### Data Fields

- english_text (string): The original English transcription derived from the FLEURS dataset.
- naija_text (string): The human-translated text in the target language.
- english_audio (datasets.Audio): The recorded speech in the source language. When loaded, it provides the path, the decoded audio array, and the sampling rate (16,000 Hz).
- naija_audio (datasets.Audio): The recorded speech in the target language. When loaded, it provides the path, the decoded audio array, and the sampling rate (16,000 Hz).
- speaker (string): The name of the speaker. This is currently the only additional field.
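Because each audio feature exposes a decoded array together with its sampling rate, a clip's duration follows directly as `len(array) / sampling_rate`. A minimal sketch on a mocked row (the values below are made up for illustration, not taken from the dataset):

```python
# Sketch: compute clip duration from a decoded audio feature.
# The row is a mocked stand-in for a loaded dataset example.

def clip_seconds(audio):
    """Duration in seconds of a decoded datasets.Audio value."""
    return len(audio["array"]) / audio["sampling_rate"]

row = {
    "naija_audio": {
        "array": [0.0] * 48000,  # 3 seconds of silence at 16 kHz
        "sampling_rate": 16000,
    }
}

print(clip_seconds(row["naija_audio"]))  # 3.0
```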
Below is a bird's-eye view of the directory structure for this repository:

```
Hypa_Fleurs/
├── README.md
├── LICENSE
├── data/
│   ├── text/
│   └── audio/
│       ├── english/
│       ├── igbo/
│       ├── yoruba/
│       └── hausa/
├── metadata/
│   ├── text_metadata.json
│   └── audio_metadata.json
└── examples/
    └── load_dataset.py
```
## Usage

### Loading with Hugging Face Datasets

The dataset is hosted on Hugging Face and can be loaded with the `datasets` library. For example:
```python
from datasets import load_dataset

# Load the Igbo split (each target language is its own split)
dataset = load_dataset("hypaai/Hypa_Fleurs", split="igbo")
print(dataset[0])
```
## Data Preparation
- Source Data: We started with the English split of Google Fleurs.
- Translation: Professional AfroVoices experts translated the texts into Igbo, Yoruba, and Hausa.
- Recording: The same experts recorded high-quality audio for both the original English texts and the translations.
- Alignment: Each text entry is aligned with its corresponding audio recording, ensuring consistency across modalities.
- Preprocessing: All data were processed to ensure uniformity in encoding (UTF-8 for text, standardized audio formats) and split distribution across each language.
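The encoding-uniformity step matters for Igbo and Yoruba in particular, whose diacritics can be stored either as precomposed code points or as base letters plus combining marks; Unicode NFC normalization makes the two spellings compare equal. A minimal sketch of the idea (not the project's actual pipeline):

```python
import unicodedata

# Sketch: normalize text to NFC so precomposed and combining-mark
# spellings of the same diacritic compare equal.

precomposed = "bụ"           # 'b' + U+1EE5 (single precomposed code point)
combining = "bu\u0323"       # 'b' + 'u' + U+0323 combining dot below

assert precomposed != combining  # the raw strings differ
nfc_a = unicodedata.normalize("NFC", precomposed)
nfc_b = unicodedata.normalize("NFC", combining)
print(nfc_a == nfc_b)  # True
```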
## Applications
The Hypa_Fleurs dataset can be used for various research and development tasks, including but not limited to:
- Machine Translation: Training and evaluating translation models between English and African languages.
- Speech Recognition (ASR): Developing systems that can transcribe speech in under-resourced languages.
- Text-to-Speech (TTS): Creating natural-sounding TTS systems using paired audio-text data.
- Cross-lingual Learning: Supporting transfer learning and multilingual model training.
- Language Identification (LI): Identifying spoken or written languages (speech or text).
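As one illustration of the machine-translation use case, rows can be flattened into (source, target) training pairs. The helper below is a hypothetical sketch operating on mocked rows shaped like the data instance above; the `<2ig>`-style language tag is one common convention, not part of the dataset:

```python
# Sketch: turn dataset rows into (source, target) pairs for MT training.
# The rows are mocked; in practice they would come from
# load_dataset("hypaai/Hypa_Fleurs", split=...).

def to_pairs(rows, lang_code):
    """Prefix each source sentence with a target-language tag."""
    return [
        (f"<2{lang_code}> {row['english_text']}", row["naija_text"])
        for row in rows
    ]

rows = [{"english_text": "Good morning.", "naija_text": "Ụtụtụ ọma."}]
src, tgt = to_pairs(rows, "ig")[0]
print(src)  # <2ig> Good morning.
```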
## Licensing and Citation

This dataset is released under the Apache 2.0 license. Please refer to the LICENSE file for full details.

When using Hypa_Fleurs in your work, please cite both this dataset and the original Google FLEURS dataset, for example:
```bibtex
@inproceedings{conneau2023fleurs,
  title={FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
  author={Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
  booktitle={2022 IEEE Spoken Language Technology Workshop (SLT)},
  year={2023},
  organization={IEEE}
}

@misc{hypafleurs,
  title={Hypa_Fleurs: Multilingual Text and Speech Dataset for Low-Resource Languages},
  author={AfroVoices},
  note={Open-sourced on Hugging Face},
  year={2025},
  url={https://huggingface.co/datasets/hypaai/Hypa_Fleurs}
}
```
## Acknowledgements
- Google Fleurs Team: For creating the foundational dataset.
- AfroVoices Experts: For their translation expertise and high-quality audio recordings.
- Community Contributions: We thank all contributors and users who help improve this dataset.
## Contact and Contributions
For any questions, issues, or contributions, please open an issue in this repository or contact [email protected]. Contributions are welcome!
## Closing Remarks
By making Hypa_Fleurs available, we hope to empower research and development in multilingual and speech technologies for African languages.
Hypa AI remains steadfast in its mission to pioneer intelligent solutions that are not just technologically advanced but are also culturally aware, ensuring that the future of AI is as diverse and inclusive as the world it serves.
AfroVoices, a subsidiary of Hypa AI, is dedicated to amplifying African voices, languages, and cultures in the intelligence age. Focused on bridging the digital representation gap, AfroVoices curates datasets and resources for African languages, promoting inclusivity and cultural appreciation in AI technologies. Their mission goes beyond technological innovation, aiming to celebrate the richness of African linguistic diversity on a global stage.