---
language:
  - bm
pretty_name: Transcription Scorer
version: 1.1.0
tags:
  - audio
  - speech
  - evaluation
  - human-feedback
  - ASR
  - reward-model
  - Bambara
license: cc-by-sa-4.0
task_categories:
  - automatic-speech-recognition
  - reinforcement-learning
  - audio-classification
annotations_creators:
  - expert-annotated
language_creators:
  - found
size_categories:
  - 1K<n<10K
dataset_info:
  - config_name: default
    audio_format: arrow
    features:
      - name: audio
        dtype: audio
      - name: duration
        dtype: float
      - name: text
        dtype: string
      - name: score
        dtype: float
    total_audio_files: 2153
    total_duration_hours: ~2
  - config_name: partially-reviewed
    features:
      - name: audio
        dtype: audio
      - name: duration
        dtype: float64
      - name: text
        dtype: string
      - name: score
        dtype: float64
    splits:
      - name: train
        num_bytes: 600583588
        num_examples: 1000
      - name: test
        num_bytes: 116626924
        num_examples: 200
    download_size: 695513651
    dataset_size: 717210512
configs:
  - config_name: partially-reviewed
    data_files:
      - split: train
        path: partially-reviewed/train-*
      - split: test
        path: partially-reviewed/test-*

---

Transcription Scorer Dataset

The Transcription Scorer dataset was created to support research in reference-free evaluation of Automatic Speech Recognition (ASR) systems using human feedback. Unlike traditional evaluation metrics such as WER and its derivatives, this dataset reflects judgments of ASR outputs by human raters across multiple criteria, simulating the way a teacher grades students.

⚙️ What’s Inside

This dataset contains 1200 audio samples (from diverse sources, including music with lyrics) totaling 2.28 hours. It is made of short- to medium-length segments, each associated with:

  • One transcription (selected as the better hypothesis of two Bambara ASR models)
  • A score between 0 and 100 assigned by human annotators
Segment duration distribution:

| Duration bucket (s) | partially-reviewed |
|---------------------|--------------------|
| 0.6 – 15            | 965                |
| 15 – 30             | 235                |
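
These counts can be verified directly from the released data. Below is a minimal sketch, assuming the partially-reviewed config and the duration/score fields described in the Fields section; both splits are concatenated since the buckets cover all 1200 segments:

from datasets import load_dataset, concatenate_datasets

# Load both splits of the partially-reviewed config and merge them
ds = load_dataset("RobotsMali/transcription-scorer", "partially-reviewed")
rows = concatenate_datasets([ds["train"], ds["test"]])

# Count segments per duration bucket (durations are in seconds)
short = sum(1 for d in rows["duration"] if d < 15)
longer = sum(1 for d in rows["duration"] if d >= 15)
print(f"0.6-15 s: {short} segments, 15-30 s: {longer} segments")

# Basic statistics of the human scores (0-100 scale)
scores = rows["score"]
print(f"mean score: {sum(scores) / len(scores):.1f} / 100")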

Sources:

  • Transcriptions were generated by two ASR models:
    • Djelia-V1 (proprietary, accessible through an API)
    • Soloni (open-source from RobotsMali)
  • An additional 81 transcriptions were intentionally randomized/shuffled across audio segments to establish a baseline for the judgments.

Most of the audio was collected by the RobotsMali AI4D Lab in collaboration with the Office de Radio et Télévision du Mali, which gave us early access to a few archives of their past broadcasts in Bamanankan. The dataset also includes a few samples from bam-asr-early.

The evaluation was based on a set of criteria, but annotators were also left room for personal, subjective judgment, so the dataset includes a form of human preference feedback; the annotations were partially reviewed by professional Bambara linguists. It is therefore a human-feedback dataset, but not one based on preferences alone: the score is designed to reflect transcription quality well enough to serve as a proxy metric.

Usage

This dataset is intended for researchers and developers facing label scarcity that makes traditional ASR evaluation metrics like WER impossible to compute, a situation especially relevant to low-resource languages such as Bambara. By leveraging human-assigned scores, it enables the development of scoring models whose outputs can be used as a proxy for transcription quality. Whether you are building evaluation tools or studying human feedback in speech systems, you may find this dataset useful. Potential applications include:

  • Developing reference-free evaluation metrics
  • Training reward models for RLHF-based fine-tuning of ASR systems
  • Understanding how human preferences relate to transcription quality

To load the dataset:

from datasets import load_dataset

# Load the dataset into Hugging Face Dataset object
dataset = load_dataset("RobotsMali/transcription-scorer", "partially-reviewed")
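
If you plan to use the scores as a reward signal (the reinforcement-learning use case above), one simple option is to rescale them to [0, 1]. A minimal sketch, reusing the dataset object loaded above; the reward column name is only an example:

# Look at a single example (field names follow the Fields section below)
sample = dataset["train"][0]
print(sample["text"], sample["duration"], sample["score"])

# Rescale the 0-100 human score into a [0, 1] reward target
def add_reward(example):
    example["reward"] = example["score"] / 100.0
    return example

dataset = dataset.map(add_reward)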

Data Splits

  • Train: 1000 examples (~1.92h)
  • Test: 200 examples (~0.37h)

This initial version is only partially reviewed, so you may contribute by opening a PR or a discussion if you find that some assigned scores are inaccurate.

Fields

  • audio: the raw audio segment (decoded as shown in the sketch below)
  • duration: audio length in seconds
  • text: the transcription text to be scored
  • score: human-assigned score (0–100)
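
Assuming the standard 🤗 Datasets Audio feature (dtype audio in the metadata above), accessing a row decodes the waveform into an array together with its sampling rate:

sample = dataset["test"][0]
audio = sample["audio"]  # dict with "path", "array" and "sampling_rate"
print(audio["sampling_rate"], len(audio["array"]))
print(sample["duration"], sample["score"])
print(sample["text"])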

Known Limitations / Issues

  • Human scoring may contain inconsistencies.
  • Only partial review/consensus exists — scores may be refined in future versions.
  • The dataset is limited in context diversity and transcription variance: only two models were used to generate transcriptions for the same ~560 audio segments, plus ~80 shuffled transcriptions for baseline estimation, so it would benefit from additional data drawn from different distributions.

🤝 Contribute

Feel something was misjudged? Want to improve score consistency? Want to add transcriptions from another model? Please open a discussion; we welcome feedback and collaboration.


📜 Citation

@misc{transcription_scorer_2025,
  title={A Dataset of Human Evaluations of Automatic Speech Recognition for the Low-Resource Bambara Language},
  author={RobotsMali AI4D Lab},
  year={2025},
  publisher={Hugging Face}
}