---
annotations_creators:
- other
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- audio-classification
task_ids:
- keyword-spotting
pretty_name: SpeechCommands
dataset_info:
- config_name: v0.01
  features:
  - name: file
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: label
    dtype:
      class_label:
        names:
          '0': 'yes'
          '1': 'no'
          '2': up
          '3': down
          '4': left
          '5': right
          '6': 'on'
          '7': 'off'
          '8': stop
          '9': go
          '10': zero
          '11': one
          '12': two
          '13': three
          '14': four
          '15': five
          '16': six
          '17': seven
          '18': eight
          '19': nine
          '20': bed
          '21': bird
          '22': cat
          '23': dog
          '24': happy
          '25': house
          '26': marvin
          '27': sheila
          '28': tree
          '29': wow
          '30': _silence_
  - name: is_unknown
    dtype: bool
  - name: speaker_id
    dtype: string
  - name: utterance_id
    dtype: int8
  splits:
  - name: train
    num_bytes: 1626283624
    num_examples: 51093
  - name: validation
    num_bytes: 217204539
    num_examples: 6799
  - name: test
    num_bytes: 98979965
    num_examples: 3081
  download_size: 1454702755
  dataset_size: 1942468128
- config_name: v0.02
  features:
  - name: file
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: label
    dtype:
      class_label:
        names:
          '0': 'yes'
          '1': 'no'
          '2': up
          '3': down
          '4': left
          '5': right
          '6': 'on'
          '7': 'off'
          '8': stop
          '9': go
          '10': zero
          '11': one
          '12': two
          '13': three
          '14': four
          '15': five
          '16': six
          '17': seven
          '18': eight
          '19': nine
          '20': bed
          '21': bird
          '22': cat
          '23': dog
          '24': happy
          '25': house
          '26': marvin
          '27': sheila
          '28': tree
          '29': wow
          '30': backward
          '31': forward
          '32': follow
          '33': learn
          '34': visual
          '35': _silence_
  - name: is_unknown
    dtype: bool
  - name: speaker_id
    dtype: string
  - name: utterance_id
    dtype: int8
  splits:
  - name: train
    num_bytes: 2684381672
    num_examples: 84848
  - name: validation
    num_bytes: 316435178
    num_examples: 9982
  - name: test
    num_bytes: 157096106
    num_examples: 4890
  download_size: 2285975869
  dataset_size: 3157912956
config_names:
- v0.01
- v0.02
---
# Dataset Card for SpeechCommands

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.tensorflow.org/datasets/catalog/speech_commands
- **Repository:** [More Information Needed]
- **Paper:** [Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition](https://arxiv.org/pdf/1804.03209.pdf)
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** Pete Warden, [email protected]
### Dataset Summary

This is a set of one-second .wav audio files, each containing a single spoken
English word or background noise. The words come from a small set of commands and are spoken by a
variety of different speakers. The dataset is designed to help train simple
machine learning models. It is covered in more detail at [https://arxiv.org/abs/1804.03209](https://arxiv.org/abs/1804.03209).

Version 0.01 of the dataset (configuration `"v0.01"`) was released on August 3rd, 2017 and contains
64,727 audio files.

Version 0.02 of the dataset (configuration `"v0.02"`) was released on April 11th, 2018 and
contains 105,829 audio files.
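
Either configuration can be loaded with the `datasets` library. A minimal loading sketch (the `google/speech_commands` Hub id is an assumption of this sketch, not something stated in this card):

```python
from datasets import load_dataset

# Load version 0.02; pass "v0.01" for the earlier release.
speech_commands = load_dataset("google/speech_commands", "v0.02")
print(speech_commands)  # DatasetDict with train/validation/test splits
```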
### Supported Tasks and Leaderboards

* `keyword-spotting`: the dataset can be used to train and evaluate keyword
spotting systems. The task is to detect preregistered keywords by classifying utterances
into a predefined set of words. The task is usually performed on-device for
fast response time. Thus, accuracy, model size, and inference time are all crucial.
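
As an illustration, a fine-tuned checkpoint can be run over the dataset with the `transformers` audio-classification pipeline. The model id and Hub dataset id below are illustrative assumptions, not part of this card:

```python
from datasets import load_dataset
from transformers import pipeline

# Hypothetical setup: any audio-classification checkpoint trained on
# 16 kHz Speech Commands audio would work here.
classifier = pipeline("audio-classification", model="MIT/ast-finetuned-speech-commands-v2")

example = load_dataset("google/speech_commands", "v0.02", split="test")[0]
predictions = classifier(example["audio"]["array"])  # raw 16 kHz waveform
print(predictions[0])  # top prediction: {"score": ..., "label": ...}
```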
### Languages

The language data in SpeechCommands is in English (BCP-47 `en`).

## Dataset Structure

### Data Instances

Example of a core word (`"label"` is a word, `"is_unknown"` is `False`):

```python
{
  "file": "no/7846fd85_nohash_0.wav",
  "audio": {
    "path": "no/7846fd85_nohash_0.wav",
    "array": array([-0.00021362, -0.00027466, -0.00036621, ...,  0.00079346,
        0.00091553,  0.00079346]),
    "sampling_rate": 16000
  },
  "label": 1,  # "no"
  "is_unknown": False,
  "speaker_id": "7846fd85",
  "utterance_id": 0
}
```
Example of an auxiliary word (`"label"` is a word, `"is_unknown"` is `True`):

```python
{
  "file": "tree/8b775397_nohash_0.wav",
  "audio": {
    "path": "tree/8b775397_nohash_0.wav",
    "array": array([-0.00854492, -0.01339722, -0.02026367, ...,  0.00274658,
        0.00335693,  0.0005188 ]),
    "sampling_rate": 16000
  },
  "label": 28,  # "tree"
  "is_unknown": True,
  "speaker_id": "8b775397",
  "utterance_id": 0
}
```

Example of the background noise (`_silence_`) class:

```python
{
  "file": "_silence_/doing_the_dishes.wav",
  "audio": {
    "path": "_silence_/doing_the_dishes.wav",
    "array": array([ 0.        ,  0.        ,  0.        , ..., -0.00592041,
       -0.00405884, -0.00253296]),
    "sampling_rate": 16000
  },
  "label": 30,  # "_silence_"
  "is_unknown": False,
  "speaker_id": None,
  "utterance_id": 0  # not meaningful for background noise clips
}
```
### Data Fields

* `file`: relative audio filename inside the original archive.
* `audio`: dictionary containing a relative audio filename,
a decoded audio array, and the sampling rate. Note that when accessing
the audio column (`dataset[0]["audio"]`) the audio is automatically decoded
and resampled to `dataset.features["audio"].sampling_rate`. Decoding and
resampling a large number of audio files can take a significant
amount of time, so it is important to query the sample index before
the `"audio"` column: `dataset[0]["audio"]` should always be preferred
over `dataset["audio"][0]`.
* `label`: either the word pronounced in an audio sample or the background noise (`_silence_`) class.
Note that it is an integer value corresponding to the class name (see the decoding sketch below).
* `is_unknown`: whether a word is auxiliary: `False` if the word is a core word or `_silence_`,
`True` if it is an auxiliary word.
* `speaker_id`: unique id of a speaker. `None` if the label is `_silence_`.
* `utterance_id`: incremental id of a word utterance within the same speaker.
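
A minimal access sketch, mapping the integer label back to its class name with `ClassLabel.int2str` (the `google/speech_commands` Hub id is again an assumption):

```python
from datasets import load_dataset

dataset = load_dataset("google/speech_commands", "v0.02", split="train")

example = dataset[0]                  # query the index first...
waveform = example["audio"]["array"]  # ...then the decoded audio
label_name = dataset.features["label"].int2str(example["label"])
print(label_name, waveform.shape)
```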
### Data Splits

The dataset has two versions (= configurations): `"v0.01"` and `"v0.02"`. `"v0.02"`
contains more words (see the [Source Data](#source-data) section for more details).

|       | train | validation | test |
|-------|------:|-----------:|-----:|
| v0.01 | 51093 |       6799 | 3081 |
| v0.02 | 84848 |       9982 | 4890 |

Note that in the train and validation sets, examples of the `_silence_` class are longer than 1 second.
You can use the following function to sample 1-second examples from the longer ones:
```python
from random import randint


def sample_noise(example, silence_label=35):
    # Extract a random 1-second slice of each (longer) `_silence_` clip,
    # e.g. inside `torch.utils.data.Dataset.__getitem__()`.
    # `silence_label` is the integer id of the `_silence_` class:
    # 35 in v0.02, 30 in v0.01.
    if example["label"] == silence_label:
        audio = example["audio"]
        one_second = audio["sampling_rate"]  # samples per second
        offset = randint(0, len(audio["array"]) - one_second)
        audio["array"] = audio["array"][offset : offset + one_second]
    return example
```
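
For on-the-fly augmentation, apply the function each time an example is fetched, e.g. `example = sample_noise(dataset[0], silence_label=35)` (a no-op for non-noise examples): the slice is re-drawn on every call, so each epoch sees a different 1-second crop of the noise clips.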
## Dataset Creation

### Curation Rationale

The primary goal of the dataset is to provide a way to build and test small
models that can detect a single word from a set of target words and differentiate it
from background noise or unrelated speech with as few false positives as possible.

### Source Data

#### Initial Data Collection and Normalization

The audio files were collected using crowdsourcing; see
[aiyprojects.withgoogle.com/open_speech_recording](https://github.com/petewarden/extract_loudest_section)
for some of the open source audio collection code that was used. The goal was to gather examples of
people speaking single-word commands, rather than conversational sentences, so
they were prompted for individual words over the course of a five-minute
session.

In version 0.01, thirty different words were recorded: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
"Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".

In version 0.02, five more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".

In both versions, ten of the words are used as commands by convention: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go". The other words are considered auxiliary (in the current implementation
this is marked by a `True` value of the `"is_unknown"` feature). Their function is to teach a model to distinguish core words
from unrecognized ones (see the sketch below).
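
A minimal sketch of separating core commands from auxiliary words via the `is_unknown` flag (the Hub id is an assumption, as in the earlier examples):

```python
from datasets import load_dataset

dataset = load_dataset("google/speech_commands", "v0.02", split="train")

core = dataset.filter(lambda example: not example["is_unknown"])
auxiliary = dataset.filter(lambda example: example["is_unknown"])
print(len(core), len(auxiliary))
```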
The `_silence_` label contains a set of longer audio clips that are either recordings or
a mathematical simulation of noise.
#### Who are the source language producers?

The audio files were collected using crowdsourcing.

### Annotations

#### Annotation process

Labels come from a list of words prepared in advance.
Speakers were prompted for individual words over the course of a five-minute
session.

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Creative Commons BY 4.0 License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode)).

### Citation Information
```
@article{speechcommandsv2,
  author = {{Warden}, P.},
  title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
  journal = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint = {1804.03209},
  primaryClass = "cs.CL",
  keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
  year = 2018,
  month = apr,
  url = {https://arxiv.org/abs/1804.03209},
}
```
### Contributions

Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.