whyen-wang committed on
Commit
7c9a8ef
·
1 Parent(s): 7b60134
.gitignore ADDED
@@ -0,0 +1,3 @@
+ annotations/
+ annotations_trainval2017.zip
+ *.jsonl
README.md CHANGED
@@ -1,3 +1,193 @@
  ---
  license: cc-by-4.0
+ size_categories:
+ - 100K<n<1M
+ task_categories:
+ - image-to-text
+ language:
+ - en
+ pretty_name: COCO Captions
  ---
+
+ # Dataset Card for "COCO 2017"
+
+ ## Quick Start
+ ### Usage
+ ```python
+ >>> from datasets import load_dataset
+
+ >>> dataset = load_dataset('whyen-wang/coco_captions')
+ >>> example = dataset['train'][500]
+ >>> print(example)
+ {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x426>,
+  'captions': ['A plate that has food on top of it with powdered sugar.',
+   'A breakfast item on a plate is sitting on a table.',
+   'different kinds of food on a glass plate',
+   'a bowl with some pancakes and toppings on it',
+   'Pancakes on a plate with banana, sauce and whipped cream toppings']}
+ ```
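+
+ A single split can also be requested directly; a minimal sketch using the same repository id:
+
+ ```python
+ >>> validation = load_dataset('whyen-wang/coco_captions', split='validation')
+ >>> len(validation)
+ 5000
+ ```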
+
+ ### Visualization
+ ```python
+ >>> import numpy as np
+ >>> from PIL import Image
+
+ >>> def visualize(example):
+ ...     for caption in example['captions']:
+ ...         print(caption)
+ ...     image = np.array(example['image'])
+ ...     return image
+
+ >>> Image.fromarray(visualize(example))
+ ```
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://cocodataset.org/
+ - **Repository:** None
+ - **Paper:** [Microsoft COCO: Common Objects in Context](https://arxiv.org/abs/1405.0312)
+ - **Leaderboard:** [Papers with Code](https://paperswithcode.com/dataset/coco)
+ - **Point of Contact:** None
+
+ ### Dataset Summary
+
+ COCO is a large-scale object detection, segmentation, and captioning dataset.
+
+ ### Supported Tasks and Leaderboards
+
+ [Image to Text](https://huggingface.co/tasks/image-to-text)
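+
+ The dataset pairs each image with several reference captions, so it can be used to evaluate or fine-tune image-to-text models. A minimal sketch, reusing the `dataset` object from the Quick Start and assuming the `transformers` library with the publicly available `Salesforce/blip-image-captioning-base` checkpoint (neither is part of this repository):
+
+ ```python
+ >>> from transformers import pipeline
+
+ >>> # Generate a caption for one validation image and compare it with the
+ >>> # reference captions stored in the dataset.
+ >>> captioner = pipeline('image-to-text', model='Salesforce/blip-image-captioning-base')
+ >>> print(captioner(dataset['validation'][0]['image']))
+ >>> print(dataset['validation'][0]['captions'])
+ ```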
+
+ ### Languages
+
+ en
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example looks as follows.
+
+ ```
+ {
+     "image": PIL.Image(mode="RGB"),
+     "captions": [
+         "Closeup of bins of food that include broccoli and bread.",
+         "A meal is presented in brightly colored plastic trays.",
+         "there are containers filled with different kinds of foods",
+         "Colorful dishes holding meat, vegetables, fruit, and bread.",
+         "A bunch of trays that have different food."
+     ]
+ }
+ ```
+
+ ### Data Fields
+
+ - `image`: a `PIL.Image` RGB image.
+ - `captions`: a list of strings, the human-written reference captions for the image (typically five per image).
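+
+ The declared features mirror the loader's `_info` definition and can be inspected programmatically once the dataset is loaded:
+
+ ```python
+ >>> dataset['train'].features
+ {'image': Image(mode='RGB', decode=True, id=None),
+  'captions': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
+ ```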
+
+ ### Data Splits
+
+ | name    |   train | validation |
+ | ------- | ------: | ---------: |
+ | default | 118,287 |      5,000 |
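+
+ For reference, the split sizes above can be checked directly on the loaded `DatasetDict`:
+
+ ```python
+ >>> dataset.num_rows
+ {'train': 118287, 'validation': 5000}
+ ```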
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Creative Commons Attribution 4.0 License
+
+ ### Citation Information
+
+ ```
+ @article{cocodataset,
+   author    = {Tsung{-}Yi Lin and Michael Maire and Serge J. Belongie and Lubomir D. Bourdev and Ross B. Girshick and James Hays and Pietro Perona and Deva Ramanan and Piotr Doll{\'{a}}r and C. Lawrence Zitnick},
+   title     = {Microsoft {COCO:} Common Objects in Context},
+   journal   = {CoRR},
+   volume    = {abs/1405.0312},
+   year      = {2014},
+   url       = {http://arxiv.org/abs/1405.0312},
+   archivePrefix = {arXiv},
+   eprint    = {1405.0312},
+   timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
+   biburl    = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@github-whyen-wang](https://github.com/whyen-wang) for adding this dataset.
coco_captions.py ADDED
@@ -0,0 +1,110 @@
+ import json
+ import datasets
+ from pathlib import Path
+
+ _HOMEPAGE = 'https://cocodataset.org/'
+ _LICENSE = 'Creative Commons Attribution 4.0 License'
+ _DESCRIPTION = 'COCO is a large-scale object detection, segmentation, and captioning dataset.'
+ _CITATION = '''\
+ @article{cocodataset,
+   author    = {Tsung{-}Yi Lin and Michael Maire and Serge J. Belongie and Lubomir D. Bourdev and Ross B. Girshick and James Hays and Pietro Perona and Deva Ramanan and Piotr Doll{\'{a}}r and C. Lawrence Zitnick},
+   title     = {Microsoft {COCO:} Common Objects in Context},
+   journal   = {CoRR},
+   volume    = {abs/1405.0312},
+   year      = {2014},
+   url       = {http://arxiv.org/abs/1405.0312},
+   archivePrefix = {arXiv},
+   eprint    = {1405.0312},
+   timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
+   biburl    = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ '''
+
+
+ class COCOCaptionsConfig(datasets.BuilderConfig):
+     '''Builder Config for coco2017'''
+
+     def __init__(
+         self, description, homepage,
+         annotation_urls, **kwargs
+     ):
+         super(COCOCaptionsConfig, self).__init__(
+             version=datasets.Version('1.0.0', ''),
+             **kwargs
+         )
+         self.description = description
+         self.homepage = homepage
+         url = 'http://images.cocodataset.org/zips/'
+         self.train_image_url = url + 'train2017.zip'
+         self.val_image_url = url + 'val2017.zip'
+         self.annotation_urls = annotation_urls
+
+
+ class COCOCaptions(datasets.GeneratorBasedBuilder):
+     BUILDER_CONFIGS = [
+         COCOCaptionsConfig(
+             description=_DESCRIPTION,
+             homepage=_HOMEPAGE,
+             # The caption JSONL files are shipped as zip archives in the
+             # repository's data/ directory (raw *.jsonl files are gitignored).
+             annotation_urls={
+                 'train': 'data/captions_train.zip',
+                 'validation': 'data/captions_validation.zip'
+             },
+         )
+     ]
+
+     def _info(self):
+         features = datasets.Features({
+             'image': datasets.Image(mode='RGB', decode=True, id=None),
+             'captions': datasets.Sequence(
+                 feature=datasets.Value(dtype='string', id=None),
+                 length=-1, id=None
+             )
+         })
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION
+         )
+
+     def _split_generators(self, dl_manager):
+         train_image_path = dl_manager.download_and_extract(
+             self.config.train_image_url
+         )
+         validation_image_path = dl_manager.download_and_extract(
+             self.config.val_image_url
+         )
+         # Each annotation archive extracts to a directory containing one JSONL file.
+         annotation_paths = dl_manager.download_and_extract(
+             self.config.annotation_urls
+         )
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     'image_path': f'{train_image_path}/train2017',
+                     'annotation_path':
+                         f'{annotation_paths["train"]}/captions_train.jsonl'
+                 }
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     'image_path': f'{validation_image_path}/val2017',
+                     'annotation_path':
+                         f'{annotation_paths["validation"]}/captions_validation.jsonl'
+                 }
+             )
+         ]
+
+     def _generate_examples(self, image_path, annotation_path):
+         # Each JSONL line holds an image file name and its list of captions.
+         idx = 0
+         image_path = Path(image_path)
+         with open(annotation_path, 'r', encoding='utf-8') as f:
+             for line in f:
+                 obj = json.loads(line.strip())
+                 example = {
+                     'image': str(image_path / obj['image']),
+                     'captions': obj['captions']
+                 }
+                 yield idx, example
+                 idx += 1
data/captions_train.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14b9a0fc5c5c1de4f85c186756fa16eb8f20c7fd39ad620868d977e932240fe1
+ size 10150166
data/captions_validation.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb39c6b91b7362e386f29e085274080b519bd8eeb67c7c3d5aa28e240422c2b6
+ size 456607
prepare.ipynb ADDED
@@ -0,0 +1,136 @@
+ {
+  "cells": [
+   {
+    "cell_type": "code",
+    "execution_count": 1,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "import os\n",
+     "import zipfile\n",
+     "import requests\n",
+     "import jsonlines\n",
+     "from tqdm import tqdm\n",
+     "from pathlib import Path\n",
+     "from pycocotools.coco import COCO\n"
+    ]
+   },
+   {
+    "cell_type": "markdown",
+    "metadata": {},
+    "source": [
+     "# Download Annotations"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "url = 'http://images.cocodataset.org/annotations/'\n",
+     "file = 'annotations_trainval2017.zip'\n",
+     "if not Path(f'./{file}').exists():\n",
+     "    response = requests.get(url + file)\n",
+     "    with open(file, 'wb') as f:\n",
+     "        f.write(response.content)\n",
+     "\n",
+     "    with zipfile.ZipFile(file, 'r') as zipf:\n",
+     "        zipf.extractall(Path())\n"
+    ]
+   },
+   {
+    "cell_type": "markdown",
+    "metadata": {},
+    "source": [
+     "# Read annotations"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "coco91_to_coco80 = [\n",
+     "    None, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, None,\n",
+     "    11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,\n",
+     "    23, None, 24, 25, None, None, 26, 27, 28, 29, 30,\n",
+     "    31, 32, 33, 34, 35, 36, 37, 38, 39, None, 40, 41,\n",
+     "    42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54,\n",
+     "    55, 56, 57, 58, 59, None, 60, None, None, 61, None,\n",
+     "    62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, None,\n",
+     "    73, 74, 75, 76, 77, 78, 79\n",
+     "]"
+    ]
+   },
+   {
+    "cell_type": "markdown",
+    "metadata": {},
+    "source": [
+     "## Image Captioning Task"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "train_data = COCO('annotations/captions_train2017.json')\n",
+     "val_data = COCO('annotations/captions_val2017.json')"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "for split, data in zip(['train', 'validation'], [train_data, val_data]):\n",
+     "    with jsonlines.open(f'data/captions_{split}.jsonl', mode='w') as writer:\n",
+     "        for image_id, image_info in tqdm(data.imgs.items()):\n",
+     "            captions = []\n",
+     "            anns = data.imgToAnns[image_id]\n",
+     "            for ann in anns:\n",
+     "                captions.append(ann['caption'])\n",
+     "            writer.write({\n",
+     "                'image': image_info['file_name'], 'captions': captions\n",
+     "            })"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 2,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "for split in ['train', 'validation']:\n",
+     "    file_path = f'data/captions_{split}.jsonl'\n",
+     "    with zipfile.ZipFile(f'data/captions_{split}.zip', 'w', zipfile.ZIP_DEFLATED) as zipf:\n",
+     "        zipf.write(file_path, os.path.basename(file_path))"
+    ]
+   }
+  ],
+  "metadata": {
+   "kernelspec": {
+    "display_name": ".venv",
+    "language": "python",
+    "name": "python3"
+   },
+   "language_info": {
+    "codemirror_mode": {
+     "name": "ipython",
+     "version": 3
+    },
+    "file_extension": ".py",
+    "mimetype": "text/x-python",
+    "name": "python",
+    "nbconvert_exporter": "python",
+    "pygments_lexer": "ipython3",
+    "version": "3.12.2"
+   }
+  },
+  "nbformat": 4,
+  "nbformat_minor": 2
+ }