---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: images
    dtype: image
  - name: ord_labels
    dtype:
      class_label:
        names:
          '0': aquatic_mammals
          '1': fish
          '2': flowers
          '3': food_containers
          '4': fruit, vegetables and mushrooms
          '5': household electrical devices
          '6': household furniture
          '7': insects
          '8': large carnivores and bear
          '9': large man-made outdoor things
          '10': large natural outdoor scenes
          '11': large omnivores and herbivores
          '12': medium-sized mammals
          '13': non-insect invertebrates
          '14': people
          '15': reptiles
          '16': small mammals
          '17': trees
          '18': transportation vehicles
          '19': non-transportation vehicles
  - name: cl_labels
    sequence:
      class_label:
        names:
          '0': aquatic_mammals
          '1': fish
          '2': flowers
          '3': food_containers
          '4': fruit, vegetables and mushrooms
          '5': household electrical devices
          '6': household furniture
          '7': insects
          '8': large carnivores and bear
          '9': large man-made outdoor things
          '10': large natural outdoor scenes
          '11': large omnivores and herbivores
          '12': medium-sized mammals
          '13': non-insect invertebrates
          '14': people
          '15': reptiles
          '16': small mammals
          '17': trees
          '18': transportation vehicles
          '19': non-transportation vehicles
  splits:
  - name: train
    num_bytes: 113545106.0
    num_examples: 50000
  download_size: 116336858
  dataset_size: 113545106.0
---

## Dataset Card for CLCIFAR20

This complementary-labeled CIFAR100 dataset contains 3 human-annotated complementary labels for each of the 50000 images in the training split of CIFAR100. We group 4-6 CIFAR100 categories into one superclass and collect complementary labels over these 20 superclasses. The workers are from [Amazon Mechanical Turk](https://www.mturk.com). For each image, we randomly sampled 4 different labels for each of 3 annotators, so every image has 3 (possibly repeated) complementary labels. For more details, please visit our [GitHub repository](https://github.com/ntucllab/CLImage_Dataset) or [paper](https://arxiv.org/abs/2305.08295).

### Dataset Structure

#### Data Instances

A sample from the training set is provided below:

```
{
  'images': <PIL.Image.Image image mode=RGB size=32x32>,
  'ord_labels': 16,
  'cl_labels': [4, 17, 17]
}
```

#### Data Fields

- `images`: A `PIL.Image.Image` object containing the 32x32 image.
- `ord_labels`: The ordinary label of the image, encoded from 0 to 19 as follows:
  - 0: aquatic_mammals
  - 1: fish
  - 2: flowers
  - 3: food_containers
  - 4: fruit, vegetables and mushrooms
  - 5: household electrical devices
  - 6: household furniture
  - 7: insects
  - 8: large carnivores and bear
  - 9: large man-made outdoor things
  - 10: large natural outdoor scenes
  - 11: large omnivores and herbivores
  - 12: medium-sized mammals
  - 13: non-insect invertebrates
  - 14: people
  - 15: reptiles
  - 16: small mammals
  - 17: trees
  - 18: transportation vehicles
  - 19: non-transportation vehicles
- `cl_labels`: Three complementary labels for each image, collected from three different workers.

## Annotation Task Design and Deployment on Amazon MTurk

To collect human-annotated labels, we used Amazon Mechanical Turk (MTurk) to deploy our annotation task. The layout and interface design for the MTurk task can be found in the file `design-layout-mturk.html`. In each task, a single image was enlarged to 200 x 200 pixels for clarity and presented alongside the question `Choose any one "incorrect" label for this image?`. Annotators were given four candidate labels to choose from (e.g., `dog, cat, ship, bird`) and were instructed to select the one that does not correctly describe the image.
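As a quick usage sketch, the fields described above can be loaded and inspected with the `datasets` library. The repository id below is a placeholder rather than the dataset's confirmed Hub identifier, and the noise estimate at the end is only an illustrative check, since human annotators occasionally mark the true label as "incorrect":

```python
from datasets import load_dataset

# NOTE: "your-namespace/clcifar20" is a placeholder -- substitute the actual
# Hugging Face Hub identifier of this dataset repository.
train = load_dataset("your-namespace/clcifar20", split="train")

# Human-readable names of the 20 superclasses.
names = train.features["ord_labels"].names

sample = train[0]
sample["images"]                                 # PIL.Image.Image, 32x32
print(names[sample["ord_labels"]])               # ordinary (true) superclass label
print([names[c] for c in sample["cl_labels"]])   # three complementary labels

# Rough estimate of annotation noise: the fraction of collected complementary
# labels that coincide with the ordinary label. Column access avoids decoding images.
ord_col = train["ord_labels"]   # list of ints
cl_col = train["cl_labels"]     # list of 3-element lists
noise = sum(c == o for o, cl in zip(ord_col, cl_col) for c in cl) / (3 * len(ord_col))
print(f"complementary labels equal to the true label: {noise:.2%}")
```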
## Citing

If you find this dataset useful, please cite the following:

```
@article{
  wang2024climage,
  title={{CLI}mage: Human-Annotated Datasets for Complementary-Label Learning},
  author={Hsiu-Hsuan Wang and Mai Tan Ha and Nai-Xuan Ye and Wei-I Lin and Hsuan-Tien Lin},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025}
}
```