diarray committed · Commit 0a56adc · 1 Parent(s): 88e94b8

Update Dataset Card: release 1.1.0

Files changed (1)
  1. README.md +30 -31
README.md CHANGED
@@ -2,7 +2,7 @@
2
  language:
3
  - bm
4
  pretty_name: Transcription Scorer
5
- version: 1.0.0
6
  tags:
7
  - audio
8
  - speech
@@ -11,7 +11,7 @@ tags:
11
  - ASR
12
  - reward-model
13
  - Bambara
14
- license: cc-by-4.0
15
  task_categories:
16
  - automatic-speech-recognition
17
  - reinforcement-learning
@@ -30,14 +30,12 @@ dataset_info:
30
  dtype: audio
31
  - name: duration
32
  dtype: float
33
- - name: transcription
34
  dtype: string
35
  - name: score
36
  dtype: float
37
- - name: labeler
38
- dtype: string
39
  total_audio_files: 2153
40
- total_duration_hours: ~9
41
  - config_name: partially-reviewed
42
  features:
43
  - name: audio
@@ -50,13 +48,13 @@ dataset_info:
50
  dtype: float64
51
  splits:
52
  - name: train
53
- num_bytes: 600583588.0
54
  num_examples: 1000
55
  - name: test
56
- num_bytes: 116626924.0
57
  num_examples: 200
58
  download_size: 695513651
59
- dataset_size: 717210512.0
60
  configs:
61
  - config_name: partially-reviewed
62
  data_files:
@@ -68,36 +66,38 @@ configs:
68
 
69
  # Transcription Scorer Dataset
70
 
71
- The **Transcription Scorer** dataset was created to support research in reference-free evaluation of Automatic Speech Recognition (ASR) systems using **human feedback**. Unlike traditional evaluation metrics such as WER and its derivatives, this dataset reflects subjective judgments of ASR outputs by human raters across multiple criteria, simulating the way a teacher grades students.
72
 
73
  ## ⚙️ What’s Inside
74
 
75
- This dataset contains **2153 audio samples** (from diverse sources including speech and music with lyrics), each associated with:
76
 
77
- - One **transcriptions** (By selecting the best hypothesis of different models)
78
  - A **score** between 0 and 100 assigned by human annotators
79
- - The **labeler** identity (What generated the transcription. That wasn't included during scoring)
80
 
81
  ### Sources:
82
  - Transcriptions were generated by two ASR models:
83
- - **Djelia-V1** (proprietary, API-based)
84
  - **Soloni** (open-source from [RobotsMali](https://huggingface.co/RobotsMali/soloni-114m-tdt-ctc-V0))
85
- - Additional 151 transcriptions were human-generated
86
- - 100 transcriptions were intentionally **randomized/shuffled** to measure baseline judgment.
87
 
88
- Most of the audios were collected by RobotsMali AI4D Lab with the [Office de Radio et Télévision du Mali](https://www.ortm.ml/) which gave us early access to a few archives of some of their past emissions in Bamanankan. But this dataset also include a few samples from [Jeli-ASR](https://huggingface.co/datasets/RobotsMali/jeli-asr) and some in house data that had not been published up to now (the 100 human made transcriptions)
89
 
90
- The evaluation was based on this criteria document: [Scoring Guide](https://docs.google.com/document/d/e/2PACX-1vRHFEAwU4C43NUHEY85auokgiG9dJgB0ApKwY41fwFGYn7xUSl1hXnk-CBp0_67c1C7mC7jXLzte3Mu/pub), but annotators had freedom to assign scores as they would in a real-world grading scenario. So it is a Human feedback dataset but not based on preferences only, the score is actually designed to be a valuable quality metric.
91
 
92
  ## **Usage**
93
 
94
- This dataset is intended for researchers and developers who face a label scarcity situation making traditional ASR evaluation metrics like WER impossible (which is especially relevent to low resource languges such as Bambara). By leveraging human-assigned scores, it enables the development of models which outputs can be used as a proxy to transcription quality. Whether you're building evaluation tools or studying human feedback in speech systems, this dataset provides a rich ground for experimentation.
95
 
96
  - Developing **reference-free** evaluation metrics
97
- - Training **reward models** for RLHF-based fine-tuning of ASR systems
98
  - Understanding how **human preferences** relate to transcription quality
99
 
100
- The data is in .arrow format for compatibility with HF's Datasets Library. So you don't need any ajustement to load the dataset directly with datasets:
101
 
102
  ```python
103
  from datasets import load_dataset
@@ -109,10 +109,10 @@ dataset = load_dataset("RobotsMali/transcription-scorer", "partially-reviewed")
109
 
110
  ## Data Splits
111
 
112
- - **Train**: 1937 examples (~8.5h)
113
- - **Test**: 216 examples (~0.6h)
114
 
115
- This initial version is only **partially reviewed**, hence the config name. We're actively seeking collaborators to help review the scores.
116
 
117
  ## Fields
118
 
@@ -120,17 +120,16 @@ This initial version is only **partially reviewed**, hence the config name. We'r
120
  - `duration`: audio length (seconds)
121
  - `transcription`: text output to be scored
122
  - `score`: human-assigned score (0–100)
123
- - `labeler`: identifier for annotator
124
 
125
- ## Known Limitations
126
 
127
- - Human scoring may contain inconsistencies (quite natural in human eval).
128
  - Only partial review/consensus exists — **scores may be refined** in future versions.
129
- - Not all samples were evaluated by all annotators (Actually every sample had a unique annotator. That might change in future versions).
130
 
131
  ## 🤝 Contribute
132
 
133
- Feel something was misjudged? Want to improve score consistency? Please open a discussion — we **welcome feedback and collaboration**.
134
 
135
  ---
136
 
@@ -139,10 +138,10 @@ Feel something was misjudged? Want to improve score consistency? Please open a d
139
  ```bibtex
140
  @misc{transcription_scorer_2025,
141
  title={A Dataset of human evaluations of Automatic Speech Recognition for low Resource Bambara language},
142
- author={Yacouba Diarra and Panga Azazia Kamaté and Adam Bouno Kampo and Nouhoum Coulibaly},
143
  year={2025},
144
  publisher={Hugging Face}
145
  }
146
  ```
147
 
148
- ---
 
2
  language:
3
  - bm
4
  pretty_name: Transcription Scorer
5
+ version: 1.1.0
6
  tags:
7
  - audio
8
  - speech
 
11
  - ASR
12
  - reward-model
13
  - Bambara
14
+ license: cc-by-sa-4.0
15
  task_categories:
16
  - automatic-speech-recognition
17
  - reinforcement-learning
 
30
  dtype: audio
31
  - name: duration
32
  dtype: float
33
+ - name: text
34
  dtype: string
35
  - name: score
36
  dtype: float
37
  total_audio_files: 2153
38
+ total_duration_hours: ~2
39
  - config_name: partially-reviewed
40
  features:
41
  - name: audio
 
48
  dtype: float64
49
  splits:
50
  - name: train
51
+ num_bytes: 600583588
52
  num_examples: 1000
53
  - name: test
54
+ num_bytes: 116626924
55
  num_examples: 200
56
  download_size: 695513651
57
+ dataset_size: 717210512
58
  configs:
59
  - config_name: partially-reviewed
60
  data_files:
 
66
 
67
  # Transcription Scorer Dataset
68
 
69
+ The **Transcription Scorer** dataset was created to support research in reference-free evaluation of Automatic Speech Recognition (ASR) systems using **human feedback**. Unlike traditional evaluation metrics such as WER and its derivatives, this dataset reflects judgments of ASR outputs by human raters across multiple criteria, simulating the way a teacher grades students.
70
 
71
  ## ⚙️ What’s Inside
72
 
73
+ This dataset contains **1200 audio samples** (from diverse sources, including music with lyrics) totaling 2.28 hours. It is made of short to medium-length segments, each associated with:
74
 
75
+ - One **transcription** (drawn by selecting the best hypothesis of two Bambara ASR models)
76
  - A **score** between 0 and 100 assigned by human annotators
77
+
78
+ | Duration bucket (s) | Samples (partially-reviewed) |
79
+ | ------------------- | ---------------------------- |
80
+ | 0.6 – 15 | 965 |
81
+ | 15 – 30 | 235 |
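
For illustration, here is a minimal sketch of how these bucket counts could be recomputed from the `duration` field. It uses the dataset ID and config name shown on this card; the bucket edges simply follow the table above.

```python
from datasets import load_dataset

# Load the partially-reviewed config (train and test splits)
ds = load_dataset("RobotsMali/transcription-scorer", "partially-reviewed")

# Tally samples per duration bucket across both splits
buckets = {"0.6-15 s": 0, "15-30 s": 0}
for split in ds.values():
    for dur in split["duration"]:
        buckets["0.6-15 s" if dur < 15 else "15-30 s"] += 1

print(buckets)  # should roughly match the counts in the table above
```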
82
 
83
  ### Sources:
84
  - Transcriptions were generated by two ASR models:
85
+ - **Djelia-V1** (proprietary, accessed through an API)
86
  - **Soloni** (open-source from [RobotsMali](https://huggingface.co/RobotsMali/soloni-114m-tdt-ctc-V0))
87
+ - An additional 81 transcriptions were intentionally **randomized/shuffled** to measure baseline judgment.
 
88
 
89
+ Most of the audio was collected by the RobotsMali AI4D Lab with the [Office de Radio et Télévision du Mali](https://www.ortm.ml/), which gave us early access to archives of some of their past broadcasts in Bamanankan. The dataset also includes a few samples from [bam-asr-early](https://huggingface.co/datasets/RobotsMali/bam-asr-early).
90
 
91
+ The evaluation was based on the [following criteria](https://docs.google.com/document/d/e/2PACX-1vRHFEAwU4C43NUHEY85auokgiG9dJgB0ApKwY41fwFGYn7xUSl1hXnk-CBp0_67c1C7mC7jXLzte3Mu/pub), but we also left room for personal subjective judgement, so the dataset includes some form of human preference feedback; the annotations were partially reviewed by professional Bambara linguists. It is therefore a human feedback dataset, but not one based on preferences alone: the score is designed to reflect transcription quality well enough to serve as a proxy metric.
92
 
93
  ## **Usage**
94
 
95
+ This dataset is intended for researchers and developers who face label scarcity that makes traditional ASR evaluation metrics like WER impractical (especially relevant to low-resource languages such as Bambara). By leveraging human-assigned scores, it enables the development of scoring models whose outputs can be used as a proxy for transcription quality. Whether you're building evaluation tools or studying human feedback in speech systems, you might find this dataset useful for:
96
 
97
  - Developing **reference-free** evaluation metrics
98
+ - Training **reward models** for RLHF-based fine-tuning of ASR systems
99
  - Understanding how **human preferences** relate to transcription quality
100
 
 
101
 
102
  ```python
  from datasets import load_dataset

  dataset = load_dataset("RobotsMali/transcription-scorer", "partially-reviewed")
  ```
109
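
As a small illustration of the reward-model use case listed above, the sketch below reuses the `dataset` object from the snippet and rescales a 0-100 score into a [0, 1] reward target. Field names follow the "Fields" section of this card; the [0, 1] scaling is an illustrative assumption, not something the card prescribes.

```python
# Reusing `dataset` loaded in the snippet above
sample = dataset["train"][0]

print(sample["duration"])       # audio length in seconds
print(sample["transcription"])  # hypothesis to be scored (listed as `text` in the top-level dataset_info)
print(sample["score"])          # human-assigned score, 0-100

reward = sample["score"] / 100.0  # illustrative normalization to [0, 1]
print(f"reward target: {reward:.2f}")
```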
 
110
  ## Data Splits
111
 
112
+ - **Train**: 1000 examples (~1.92h)
113
+ - **Test**: 200 examples (~0.37h)
114
 
115
+ This initial version is only **partially reviewed**, so you may contribute by opening a PR or a discussion if you find that some assigned scores are inaccurate.
116
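
As a quick sanity check of these counts and approximate durations, a short sketch reusing the `dataset` object loaded above:

```python
# Reusing `dataset` loaded in the Usage snippet above
for name, split in dataset.items():
    hours = sum(split["duration"]) / 3600
    print(f"{name}: {len(split)} examples, ~{hours:.2f} h")
```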
 
117
  ## Fields
118
 
 
120
  - `duration`: audio length (seconds)
121
  - `transcription`: text output to be scored
122
  - `score`: human-assigned score (0–100)
 
123
 
124
+ ## Known Limitations / Issues
125
 
126
+ - Human scoring may contain inconsistencies.
127
  - Only partial review/consensus exists — **scores may be refined** in future versions.
128
+ - The dataset is very limited in context diversity and transcription variance: only two models were used to generate transcriptions for the same ~560 audios, plus ~80 shuffled transcriptions for baseline estimation (2 × ~560 + ~80 ≈ 1200 samples), so it would benefit from additional data from different distributions.
129
 
130
  ## 🤝 Contribute
131
 
132
+ Feel something was misjudged? Want to improve score consistency? Want to add transcriptions from another model? Please open a discussion — we **welcome feedback and collaboration**.
133
 
134
  ---
135
 
 
138
  ```bibtex
139
  @misc{transcription_scorer_2025,
140
  title={A Dataset of human evaluations of Automatic Speech Recognition for low Resource Bambara language},
141
+ author={RobotsMali AI4D Lab},
142
  year={2025},
143
  publisher={Hugging Face}
144
  }
145
  ```
146
 
147
+ ---