POLAR: Posture-Level Action Recognition Dataset

Disclaimer

This dataset is a restructured and YOLO-formatted version of the original POsture-Level Action Recognition (POLAR) dataset. I do not claim ownership or licensing rights over this dataset. For full details, including original licensing and usage terms, please refer to the original dataset on Mendeley Data.

Motivation

The original POLAR dataset, while comprehensive, has a complex directory structure that can make it difficult to navigate and to integrate with modern object detection frameworks such as YOLO. To address this, I reorganized the dataset into a clean, split-based layout and converted the annotations to YOLO-compatible labels, so the dataset can be used directly for training action recognition models.

Description

The POLAR (POsture-Level Action Recognition) dataset focuses on nine categories of human actions directly tied to posture: bending, jumping, lying, running, sitting, squatting, standing, stretching, and walking. It contains a total of 35,324 images and covers approximately 99% of posture-level human actions in daily life, based on the authors' analysis of the PASCAL VOC dataset.

This dataset is suitable for tasks such as:

  • Image Classification
  • Action Recognition
  • Object Detection (with YOLO-formatted bounding boxes around persons)

Each image contains one or more persons, each annotated with a bounding box labeled by their primary action/pose.

Dataset Structure

The dataset is pre-split into train, val, and test sets. The directory structure is as follows:

POLAR/
β”œβ”€β”€ Annotations/          # Original JSON annotation files (for reference)
β”‚   β”œβ”€β”€ test/
β”‚   β”œβ”€β”€ train/
β”‚   └── val/
β”œβ”€β”€ images/               # Original images (.jpg)
β”‚   β”œβ”€β”€ test/
β”‚   β”œβ”€β”€ train/
β”‚   └── val/
β”œβ”€β”€ labels/               # YOLO-formatted .txt label files
β”‚   β”œβ”€β”€ test/
β”‚   β”œβ”€β”€ train/
β”‚   └── val/
β”œβ”€β”€ splits/               # Split definition files
β”‚   β”œβ”€β”€ test.txt
β”‚   β”œβ”€β”€ train.txt
β”‚   └── val.txt
└── dataset.yaml          # YOLO configuration file (for training)

  • splits/: Text files listing image filenames (one per line, without extensions) for each split.
  • labels/: For each image (e.g., images/train/p1_00001.jpg), there is a corresponding labels/train/p1_00001.txt with YOLO-format annotations (class ID + normalized bounding box coordinates).
  • dataset.yaml: Pre-configured for Ultralytics YOLO training (see YOLO Dataset Format for details).
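
For reference, a dataset.yaml for this layout would look roughly like the sketch below. This follows standard Ultralytics conventions; the exact contents of the shipped file may differ slightly:

  path: .             # dataset root
  train: images/train
  val: images/val
  test: images/test

  names:
    0: bending
    1: jumping
    2: lying
    3: running
    4: sitting
    5: squatting
    6: standing
    7: stretching
    8: walking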

Changes Made

Compared to the original dataset, the following modifications were applied:

  1. Restructured Splits:

    • Organized images and annotations into explicit train, val, and test subfolders.
    • Used the original split definitions from the provided .txt files in splits/ to ensure consistency.
  2. YOLO Formatting:

    • Converted JSON annotations to YOLO .txt files in the labels/ folder.
    • Each line in a .txt file follows the format: <class_id> <center_x> <center_y> <norm_width> <norm_height> (normalized to [0,1]).
    • Class IDs map to actions as follows (0-8):
      • 0: bending
      • 1: jumping
      • 2: lying
      • 3: running
      • 4: sitting
      • 5: squatting
      • 6: standing
      • 7: stretching
      • 8: walking
    • Included a ready-to-use dataset.yaml for YOLOv8+ training.

These changes simplify setup while preserving the original data integrity.
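
As an illustration of the conversion step, the following Python sketch shows how a pixel-space corner box is turned into a normalized YOLO label line. The JSON field names ("objects", "action", "bbox") are hypothetical, since the original annotation schema is not reproduced here; adapt them to the actual files in Annotations/.

  import json
  from pathlib import Path

  from PIL import Image

  # Class order matches the ID mapping above.
  ACTIONS = ["bending", "jumping", "lying", "running", "sitting",
             "squatting", "standing", "stretching", "walking"]

  def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
      # Corner box (pixels) -> "<class_id> <cx> <cy> <w> <h>", normalized to [0, 1].
      cx = (x_min + x_max) / 2 / img_w
      cy = (y_min + y_max) / 2 / img_h
      w = (x_max - x_min) / img_w
      h = (y_max - y_min) / img_h
      return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

  def convert(json_path, image_path, label_path):
      ann = json.loads(Path(json_path).read_text())
      img_w, img_h = Image.open(image_path).size
      lines = [
          to_yolo_line(ACTIONS.index(obj["action"]), *obj["bbox"], img_w, img_h)
          for obj in ann["objects"]  # hypothetical field names; adjust to the real schema
      ]
      Path(label_path).write_text("\n".join(lines) + "\n")

For example, a box spanning pixels (50, 100) to (150, 300) around a sitting person in a 500×400 image becomes the line 4 0.200000 0.500000 0.200000 0.500000.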

Usage

Training with YOLO (Ultralytics)

  1. Clone or download this dataset to your working directory.
  2. Install Ultralytics: pip install ultralytics.
  3. Train a model (e.g., using YOLOv8 nano):
    yolo detect train data=dataset.yaml model=yolov8n.pt epochs=100 imgsz=640
    
    • This assumes the command is run from the dataset root (POLAR/), where dataset.yaml resides.
    • Adjust epochs, imgsz, or other hyperparameters as needed.
    • YOLO will automatically pair images with labels based on filenames.
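
Alternatively, the same run can be launched from Python via the Ultralytics API:

  from ultralytics import YOLO

  model = YOLO("yolov8n.pt")  # start from pretrained nano weights
  model.train(data="dataset.yaml", epochs=100, imgsz=640)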

For more details on YOLO integration, see the Ultralytics documentation.
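
After training, the best checkpoint can be used for posture detection on new images. The weights path below is the default Ultralytics output location; adjust it to your actual run directory, and the image name is a hypothetical placeholder:

  from ultralytics import YOLO

  model = YOLO("runs/detect/train/weights/best.pt")  # default output path
  results = model("example.jpg")                     # hypothetical input image
  for box in results[0].boxes:
      # Print the predicted action class and its confidence score.
      print(model.names[int(box.cls)], float(box.conf))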

Citation

If you use this dataset in your research, please cite the original work:

Ma, Wentao; Liang, Shuang (2021), β€œPOLAR: Posture-level Action Recognition Dataset”, Mendeley Data, V1, doi: 10.17632/hvnsh7rwz7.1.


Last updated: October 20, 2025
