POLAR: Posture-Level Action Recognition Dataset
Disclaimer
This dataset is a restructured and YOLO-formatted version of the original POsture-Level Action Recognition (POLAR) dataset. I do not claim ownership or licensing rights over this dataset. For full details, including original licensing and usage terms, please refer to the original dataset on Mendeley Data.
Motivation
The original POLAR dataset, while comprehensive, has a somewhat complex structure that can make it challenging to navigate and integrate with modern object detection frameworks like YOLO. To address this, I reorganized the dataset into a clean, split-based format and converted the annotations to YOLO-compatible labels. This makes it easier to use for training action recognition models directly.
Description
The POLAR (POsture-Level Action Recognition) dataset focuses on nine categories of human actions directly tied to posture: bending, jumping, lying, running, sitting, squatting, standing, stretching, and walking. It contains a total of 35,324 images and covers approximately 99% of posture-level human actions in daily life, based on the authors' analysis of the PASCAL VOC dataset.
This dataset is suitable for tasks such as:
- Image Classification
- Action Recognition
- Object Detection (with YOLO-formatted bounding boxes around persons)
Each image contains one or more persons, each annotated with a bounding box labeled by their primary action/pose.
Dataset Structure
The dataset is pre-split into train, val, and test sets. The directory structure is as follows:
POLAR/
├── Annotations/        # Original JSON annotation files (for reference)
│   ├── test/
│   ├── train/
│   └── val/
├── images/             # Original images (.jpg)
│   ├── test/
│   ├── train/
│   └── val/
├── labels/             # YOLO-formatted .txt label files
│   ├── test/
│   ├── train/
│   └── val/
├── splits/             # Split definition files
│   ├── test.txt
│   ├── train.txt
│   └── val.txt
└── dataset.yaml        # YOLO configuration file (for training)
- `splits/`: Text files listing image filenames (one per line, without extensions) for each split.
- `labels/`: For each image (e.g., `images/train/p1_00001.jpg`), there is a corresponding `labels/train/p1_00001.txt` with YOLO-format annotations (class ID + normalized bounding box coordinates). The sketch below shows how the split files tie the two together.
- `dataset.yaml`: Pre-configured for Ultralytics YOLO training (see YOLO Dataset Format for details).
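As a concrete illustration, here is a minimal Python sketch that resolves a split file into (image, label) path pairs. It assumes the dataset root is a folder named `POLAR/` as in the tree above; everything else follows the layout just described.

```python
from pathlib import Path

root = Path("POLAR")  # assumed checkout location; adjust as needed

def split_pairs(split: str):
    """Yield (image_path, label_path) pairs for one split, driven by splits/<split>.txt."""
    split_file = root / "splits" / f"{split}.txt"
    for stem in split_file.read_text().split():
        yield (root / "images" / split / f"{stem}.jpg",
               root / "labels" / split / f"{stem}.txt")

# Sanity check: every image listed in the train split should have a label file.
for img, lbl in split_pairs("train"):
    assert img.exists() and lbl.exists(), f"missing pair for stem {img.stem}"
```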
Changes Made
Compared to the original dataset, the following modifications were applied:
Restructured Splits:
- Organized images and annotations into explicit train, val, and test subfolders.
- Used the original split definitions from the provided `.txt` files in `splits/` to ensure consistency.
YOLO Formatting:
- Converted JSON annotations to YOLO `.txt` files in the `labels/` folder.
- Each line in a `.txt` file follows the format `<class_id> <center_x> <center_y> <norm_width> <norm_height>` (normalized to [0, 1]); see the parsing sketch at the end of this section.
- Class IDs map to actions as follows (0-8):
- 0: bending
- 1: jumping
- 2: lying
- 3: running
- 4: sitting
- 5: squatting
- 6: standing
- 7: stretching
- 8: walking
- Included a ready-to-use `dataset.yaml` for YOLOv8+ training.
These changes simplify setup while preserving the original data integrity.
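To make the label format concrete, here is a small parsing sketch. It relies only on the format described above (one `<class_id> <center_x> <center_y> <norm_width> <norm_height>` line per box); the helper name and the example image size are hypothetical.

```python
from pathlib import Path

# Class-ID-to-action mapping, taken from the list above.
ACTIONS = ["bending", "jumping", "lying", "running", "sitting",
           "squatting", "standing", "stretching", "walking"]

def read_yolo_labels(label_path, img_w, img_h):
    """Parse one YOLO .txt label file into (action, pixel corner box) tuples."""
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        cls, cx, cy, w, h = line.split()
        cx, cy, w, h = float(cx), float(cy), float(w), float(h)
        # Convert normalized center/size to pixel (x1, y1, x2, y2) corners.
        x1, y1 = (cx - w / 2) * img_w, (cy - h / 2) * img_h
        x2, y2 = (cx + w / 2) * img_w, (cy + h / 2) * img_h
        boxes.append((ACTIONS[int(cls)], (x1, y1, x2, y2)))
    return boxes

# Example call (the image size here is made up; read it from the actual image):
# read_yolo_labels("labels/train/p1_00001.txt", img_w=640, img_h=480)
```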
Usage
Training with YOLO (Ultralytics)
- Clone or download this dataset to your working directory.
- Install Ultralytics: `pip install ultralytics`.
- Train a model (e.g., YOLOv8 nano) from the CLI, or via the Python API shown below:

  ```
  yolo detect train data=dataset.yaml model=yolov8n.pt epochs=100 imgsz=640
  ```

  - This assumes `dataset.yaml` is in the dataset root (`POLAR/`).
  - Adjust `epochs`, `imgsz`, or other hyperparameters as needed.
  - YOLO will automatically pair images with labels based on filenames.
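The same run can be launched from Python with the Ultralytics API; this mirrors the CLI command above (`yolov8n.pt` is downloaded automatically if it is not already present):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained YOLOv8 nano checkpoint
model.train(data="dataset.yaml", epochs=100, imgsz=640)

metrics = model.val()  # evaluate on the val split defined in dataset.yaml
```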
For more details on YOLO integration, see the Ultralytics documentation.
Citation
If you use this dataset in your research, please cite the original work:
Ma, Wentao; Liang, Shuang (2021), "POLAR: Posture-level Action Recognition Dataset", Mendeley Data, V1, doi: 10.17632/hvnsh7rwz7.1.