Update README.md
README.md (changed)

@@ -141,79 +141,94 @@ pretty_name: Danbooru 2025 Metadata
# Dataset Card for Danbooru 2025 Metadata

- **Improved Tag Accuracy**: Historical tag renames and additions are accurately reflected, reducing the potential mismatch or redundancy often found in older metadata dumps.
- **Less AI Noise**: Compared to many legacy scrapes, the 2025 data incorporates updated annotations and filters out many unlabeled AI-generated images.

```python
from datasets import load_dataset

danbooru_metadata = load_dataset("trojblue/danbooru2025-metadata", split="train")
df = danbooru_metadata.to_pandas()
```

The resulting dataframe exposes the following columns:

```
Index([
    'approver_id', 'bit_flags', 'created_at', 'down_score', 'fav_count',
    'file_ext', 'file_size', 'file_url', 'has_active_children', 'has_children',
    'has_large', 'has_visible_children', 'id', 'image_height', 'image_width',
    'is_banned', 'is_deleted', 'is_flagged', 'is_pending', 'large_file_url',
    'last_comment_bumped_at', 'last_commented_at', 'last_noted_at', 'md5',
    'media_asset_created_at', 'media_asset_duration', 'media_asset_file_ext',
    'media_asset_file_key', 'media_asset_file_size', 'media_asset_id',
    'media_asset_image_height', 'media_asset_image_width',
    'media_asset_is_public', 'media_asset_md5', 'media_asset_pixel_hash',
    'media_asset_status', 'media_asset_updated_at', 'media_asset_variants',
    'parent_id', 'pixiv_id', 'preview_file_url', 'rating', 'score', 'source',
    'tag_count', 'tag_count_artist', 'tag_count_character',
    'tag_count_copyright', 'tag_count_general', 'tag_count_meta', 'tag_string',
    'tag_string_artist', 'tag_string_character', 'tag_string_copyright',
    'tag_string_general', 'tag_string_meta', 'up_score', 'updated_at',
    'uploader_id'
], dtype='object')
```
|
| 198 |
|
| 199 |
-
##
|
| 200 |
|
| 201 |
-
**
|
|
|
|
|
|
|
| 202 |
|
| 203 |
-
|
| 204 |
-
- Certain restricted tags (e.g., `loli`) are inaccessible without special permissions and are therefore absent in this dataset.
|
| 205 |
-
- If you require more comprehensive metadata (including hidden or restricted tags), consider merging this data with older scrapes such as Danbooru2021.
|
| 206 |
|
| 207 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 208 |
|
| 209 |
```python
import pandas as pd
from pandarallel import pandarallel

pandarallel.initialize(nb_workers=4, progress_bar=True)

def flatten_dict(d, parent_key='', sep='_'):
    """Recursively flattens a nested dictionary."""
    items = []
    for k, v in d.items():
        new_key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        else:
            items.append((new_key, v))
    return dict(items)

def extract_all_illust_info(json_content):
    flattened_data = flatten_dict(json_content)
    return pd.Series(flattened_data)

def dicts_to_dataframe_parallel(dicts):
    """Converts a list of dicts to a flattened DataFrame using pandarallel."""
    df = pd.DataFrame(dicts)
```

If you use this dataset in research or production, please cite appropriately and abide by all relevant terms and conditions.

size_categories:
- 1M<n<10M
---

<p align="center">
  <img src="https://huggingface.co/datasets/trojblue/danbooru2025-metadata/resolve/main/57931572.png" alt="Danbooru Logo" width="120"/>
</p>

<h1 align="center">🎨 Danbooru 2025 Metadata</h1>

<p align="center">
  <strong>Latest Post ID:</strong> <code>9,158,800</code><br/>
  <em>(as of Apr 16, 2025)</em>
</p>

---

📁 **About the Dataset**

This dataset provides structured metadata for user-submitted images on **Danbooru**, a large-scale imageboard focused on anime-style artwork.

Scraping began on **January 2, 2025**, and the data are stored in **Parquet** format for efficient programmatic access.
Compared to earlier versions, this snapshot includes:

- More consistent tag history tracking
- Better coverage of older or previously skipped posts
- Reduced presence of unlabeled AI-generated entries

---

## Dataset Overview

Each row corresponds to a Danbooru post, with fields including:

- Tag list (both general and system-specific)
- Upload timestamp
- File details (size, extension, resolution)
- User stats (favorites, score, etc.)

The schema follows Danbooru's public API structure and should be familiar to anyone who has worked with its JSON output.

**File Format**

The metadata are stored in a flat table. Nested dictionaries have been flattened using a consistent naming scheme (`parentkey_childkey`) to aid downstream use in ML pipelines or indexing tools.
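
For illustration, a nested `media_asset` object from the raw API response maps onto prefixed columns in the flat table (the values below are made up):

```python
# Hypothetical raw post fragment shaped like Danbooru's API output
raw = {"id": 1234, "media_asset": {"file_ext": "jpg", "file_size": 123456}}

# After flattening, nested keys become prefixed column names:
# {"id": 1234, "media_asset_file_ext": "jpg", "media_asset_file_size": 123456}
```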
---

## Access & Usage

You can load the dataset via the Hugging Face `datasets` library:

```python
from datasets import load_dataset

danbooru_metadata = load_dataset("trojblue/danbooru2025-metadata", split="train")
df = danbooru_metadata.to_pandas()
```

Potential use cases include:

- Image retrieval systems
- Text-to-image alignment tasks
- Dataset curation or filtering
- Historical or cultural analysis of tagging trends

Be cautious when working in public settings: the dataset contains adult content.
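
For curation or filtering, a minimal sketch using the `rating`, `is_deleted`, and `is_banned` columns described later in this card (the exact cut is up to you):

```python
# Keep only general-rated posts that are neither deleted nor banned.
safe_df = df[
    (df["rating"] == "g")      # use {"g", "s"} for a looser cut
    & (~df["is_deleted"])
    & (~df["is_banned"])
]
print(f"{len(safe_df):,} posts after filtering")
```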

---

## Notable Characteristics

- **Single-Snapshot Coverage**: All posts up to the stated ID are included; there is no need to merge partial scrapes.
- **Reduced Tag Drift**: Many historical tag renames and merges are reflected correctly.
- **Filtered AI-Generated Posts**: Some attempt was made to identify and exclude unlabeled AI-generated entries, though the process is imperfect.

Restricted tags (e.g., certain content filters) are inaccessible without privileged API keys and are therefore missing here.
If you need metadata that includes those tags, you will need to integrate previous datasets (such as Danbooru2021) and resolve inconsistencies manually.
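
A rough sketch of such a merge, assuming an older dump is available locally as a Parquet file (the path below is hypothetical):

```python
import pandas as pd

# Hypothetical older metadata dump; adjust the path/format to whatever you actually have.
old_df = pd.read_parquet("danbooru2021_metadata.parquet")

# Combine both dumps and keep the most recently updated row per post id.
merged = pd.concat([df, old_df], ignore_index=True)
merged = merged.sort_values("updated_at").drop_duplicates(subset="id", keep="last")
```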

---

## Code: Flattening the JSON

Included below is a simplified example showing how the raw JSON was transformed:

```python
import pandas as pd
from pandarallel import pandarallel

# Initialize multiprocessing
pandarallel.initialize(nb_workers=4, progress_bar=True)

def flatten_dict(d, parent_key='', sep='_'):
    # Recursively flatten nested dicts into 'parentkey_childkey' entries
    items = []
    for k, v in d.items():
        new_key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        else:
            items.append((new_key, v))
    return dict(items)

def extract_all_illust_info(json_content):
    return pd.Series(flatten_dict(json_content))

def dicts_to_dataframe_parallel(dicts):
    df = pd.DataFrame(dicts)
    return df.parallel_apply(lambda row: extract_all_illust_info(row.to_dict()), axis=1)
```
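
For instance, applied to a couple of minimal post dicts (values are illustrative, not real posts), the helper yields one flattened row per dict:

```python
sample_posts = [
    {"id": 1, "rating": "g", "media_asset": {"file_ext": "jpg", "file_size": 123456}},
    {"id": 2, "rating": "s", "media_asset": {"file_ext": "png", "file_size": 654321}},
]

flat_df = dicts_to_dataframe_parallel(sample_posts)
print(flat_df.columns.tolist())
# expected: ['id', 'rating', 'media_asset_file_ext', 'media_asset_file_size']
```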

---

## Warnings & Considerations

- **NSFW Material**: Includes sexually explicit tags and content. Do not deploy without clear filtering and compliance checks.
- **Community Bias**: Tags are user-generated and reflect collective subjectivity; representation may be skewed or incomplete.
- **Data Licensing**: Image rights remain with the original uploaders. This dataset includes metadata only, not media. Review Danbooru's [Terms of Service](https://danbooru.donmai.us/static/terms_of_service) for reuse constraints.
- **Missing Content**: Posts with restricted tags or deleted content may appear with incomplete fields or be absent entirely.

---

## Column Summaries (Sample, Apr 16, 2025)

Full schema and additional statistics are viewable on the Hugging Face Dataset Viewer.

### File Information

- **file_url**: 8.8 million unique file links
- **file_ext**: 9 file types
  - `'jpg'`: 73.3%
  - `'png'`: 25.4%
  - Other types (`mp4`, `gif`, `zip`, etc.): <1.5% combined
- **file_size** and **media_asset_file_size**:
  - Min: 49 bytes
  - Max: ~106 MB
  - Avg: ~1.5 MB

### Image Dimensions

- **image_width**:
  - Min: 1 px
  - Max: 35,102 px
  - Mean: 1,471 px
- **image_height**:
  - Min: 1 px
  - Max: 54,250 px
  - Mean: 1,760 px

(Note: extremely small dimensions may indicate deleted or broken images.)

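If you want to drop such placeholder entries before downstream use, a simple size cut works (the threshold is illustrative, not an official recommendation):

```python
# Drop rows whose recorded dimensions suggest deleted or broken files.
MIN_SIDE = 64  # illustrative threshold
valid_df = df[(df["image_width"] >= MIN_SIDE) & (df["image_height"] >= MIN_SIDE)]
```
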
### Scoring and Engagement

- **score** (net = up − down):
  - Min: −167
  - Max: 2,693
  - Mean: 26.15
- **up_score**:
  - Max: 2,700
  - Mean: 25.87
- **down_score**:
  - Min: −179
  - Mean: −0.24
- **fav_count**:
  - Max: 4,458
  - Mean: 32.49

### Rating and Moderation

- **rating**:
  - `'s'` (sensitive): 49.5%
  - `'g'` (general): 29.4%
  - `'q'` (questionable): 11.2%
  - `'e'` (explicit): 9.8%
- **is_banned**: 1.13% true
- **is_deleted**: 5.34% true
- **is_flagged** / **is_pending**: <0.01% true (rare moderation edge cases)

### Children & Variations

- **has_children**: 10.7%
- **has_active_children**: 10.0%
- **has_visible_children**: 10.3%
- **has_large**: 70.6% of posts are linked to full-resolution versions

### Tag Breakdown

(Tag counts are per post; some posts carry hundreds.)

- **tag_count** (total tags): 36.3 avg
- **tag_count_artist**: 0.99 avg
- **tag_count_character**: 1.62 avg
- **tag_count_copyright**: 1.39 avg
- **tag_count_general**: 30.0 avg
- **tag_count_meta**: 2.3 avg

Some outliers contain hundreds of tags, up to 1,250 in total on rare posts.

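Tags in the `tag_string*` columns follow the Danbooru API convention of space-separated tag names, so they can be expanded into lists for counting or filtering; a small sketch:

```python
# Split space-separated tag strings into Python lists (per the API convention).
general_tags = df["tag_string_general"].str.split()
character_tags = df["tag_string_character"].str.split()

# Example: the ten most frequent general tags in the snapshot.
print(general_tags.explode().value_counts().head(10))
```
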
### Other Fields

- **uploader_id** (anonymized integer ID)
- **updated_at** (timestamp): nearly every post has a unique update time

(last updated: 2025-04-16 12:15:29.262308)
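
Most of the figures above can be recomputed directly from the loaded dataframe; a minimal sketch using the documented column names:

```python
# Recompute a few of the summary statistics reported in this card.
print(df["file_ext"].value_counts(normalize=True).head())    # file-type shares
print(df["rating"].value_counts(normalize=True))             # rating distribution
print(df[["score", "up_score", "down_score", "fav_count"]].describe())
print(df["tag_count"].mean())                                 # average tags per post
```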