---
language:
  - en
size_categories:
  - 100K<n<1M
task_categories:
  - summarization
  - image-to-text
  - text-generation
tags:
  - summarization
  - vision
  - DeepSeek-OCR
  - multilingual
  - visual-text-encoding
  - random-augmentation
library_name: datasets
license: mit
pretty_name: XSum BBC News Summarization
dataset_info:
  features:
    - name: text
      dtype: string
    - name: summary
      dtype: string
    - name: image
      dtype: image
    - name: source_dataset
      dtype: string
    - name: original_split
      dtype: string
    - name: original_index
      dtype: int64
  splits:
    - name: train
      num_examples: 204017
---

DeepSynth - XSum BBC News Summarization

Dataset Description

BBC news articles paired with single-sentence summaries. The dataset targets extreme summarization, where a single sentence must capture the essence of the entire article.

This dataset is part of the DeepSynth project, which uses visual text encoding for multilingual summarization with the DeepSeek-OCR vision-language model. Text documents are converted into images and processed through a frozen 380M parameter visual encoder, enabling 20x token compression while preserving document layout and structure.

Key Features

  • Original High-Quality Images: Full-resolution images stored once, augmented on-the-fly during training
  • Random Augmentation Pipeline: Rotation, perspective, color jitter, and resize transforms for better generalization
  • Visual Text Encoding: 20x compression ratio (1 visual token ≈ 20 text tokens)
  • Document Structure Preservation: Layout and formatting maintained through image representation
  • Human-Written Summaries: High-quality reference summaries for each document
  • Deduplication Tracking: Source dataset and index tracking prevents duplicates

Dataset Statistics

  • Total Samples: 204,017 (train split)
  • Language(s): English
  • Domain: BBC news articles
  • Average Document Length: ~400 tokens
  • Average Summary Length: ~20 tokens (single sentence)

Source Dataset

Based on the XSum dataset, built from BBC news articles published between 2010 and 2017.

Image Augmentation Pipeline

Images are stored at original resolution (up to 1600×2200) and augmented during training for better generalization:

Available Augmentation Transforms

  • Random Rotation: ±10° rotation for orientation invariance
  • Random Perspective: 0.1-0.2 distortion to simulate viewing angles
  • Random Resize: 512-1600px range for multi-scale learning
  • Color Jitter: Brightness, contrast, saturation adjustments (±20%)
  • Random Horizontal Flip: Optional (use with caution for text)

All transforms preserve aspect ratio with padding to maintain text readability. This approach:

  • Reduces storage: ~6x less disk space (one original image instead of six pre-computed resolutions)
  • Increases flexibility: Any resolution on-the-fly vs pre-computed fixed sizes
  • Improves generalization: Random transforms prevent overfitting to specific resolutions
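
The project exposes these transforms via deepsynth.data.transforms (used in the training example below); an approximately equivalent pipeline can be sketched with torchvision. The helper below illustrates the aspect-ratio-preserving resize described above; it is a sketch, not the project's exact implementation:

import random
from PIL import Image
from torchvision import transforms

def resize_with_padding(img: Image.Image, target: int) -> Image.Image:
    # Scale the longer side to `target`, then center on a white canvas so
    # the aspect ratio (and thus text geometry) is preserved.
    scale = target / max(img.size)
    new_size = (int(img.width * scale), int(img.height * scale))
    resized = img.resize(new_size)  # bicubic by default
    canvas = Image.new("RGB", (target, target), (255, 255, 255))
    canvas.paste(resized, ((target - new_size[0]) // 2, (target - new_size[1]) // 2))
    return canvas

def augment(img: Image.Image) -> Image.Image:
    pipeline = transforms.Compose([
        transforms.RandomRotation(10, fill=255),                        # ±10°
        transforms.RandomPerspective(distortion_scale=0.1, p=0.5, fill=255),
        transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    ])
    target = random.randint(512, 1600)  # random multi-scale resize
    return resize_with_padding(pipeline(img.convert("RGB")), target)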

Dataset Structure

Data Fields

  • text (string): Original document text
  • summary (string): Human-written summary
  • image (PIL.Image): Original full-size rendered document image (up to 1600×2200)
  • source_dataset (string): Origin dataset name
  • original_split (string): Source split (train/validation/test)
  • original_index (int): Original sample index for deduplication
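
The last three fields form a natural deduplication key. A minimal sketch that scans for duplicate keys without decoding the images (select_columns drops the heavy image column first):

from datasets import load_dataset

meta = load_dataset("baconnier/deepsynth-en-xsum", split="train")
meta = meta.select_columns(["source_dataset", "original_split", "original_index"])

seen, duplicates = set(), 0
for sample in meta:
    key = (sample["source_dataset"], sample["original_split"], sample["original_index"])
    if key in seen:
        duplicates += 1
    seen.add(key)
print(f"{duplicates} duplicate keys")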

Data Example

{
    'text': 'The government has announced new measures to...',
    'summary': 'Government unveils climate change plan.',
    'image': <PIL.Image>,  # Original resolution (up to 1600×2200)
    'source_dataset': 'Rexhaif/xsum_reduced',
    'original_split': 'train',
    'original_index': 0
}

Usage

Loading the Dataset

from datasets import load_dataset

# Load full dataset
dataset = load_dataset("baconnier/deepsynth-en-xsum")

# Streaming for large datasets
dataset = load_dataset("baconnier/deepsynth-en-xsum", streaming=True)

Training Example with DeepSeek-OCR and Augmentation

from transformers import AutoProcessor, AutoModelForVision2Seq
from datasets import load_dataset
from deepsynth.data.transforms import create_training_transform

# Load model and processor
model = AutoModelForVision2Seq.from_pretrained(
    "deepseek-ai/DeepSeek-OCR", trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(
    "deepseek-ai/DeepSeek-OCR", trust_remote_code=True
)

# Load dataset
dataset = load_dataset("baconnier/deepsynth-en-xsum")

# Create augmentation pipeline (random rotation, perspective, resize, color jitter)
transform = create_training_transform(
    target_size_range=(512, 1600),  # Random resize range
    rotation_degrees=10,             # ±10° rotation
    perspective_distortion=0.1,      # Perspective transform
    brightness_factor=0.2,           # ±20% brightness
    contrast_factor=0.2,             # ±20% contrast
)

# Process sample with augmentation
sample = dataset['train'][0]
augmented_image = transform(sample['image'])  # Apply random transforms
inputs = processor(
    images=augmented_image,
    text=sample['text'],
    return_tensors="pt"
)

# Fine-tune decoder only (freeze encoder)
for param in model.encoder.parameters():
    param.requires_grad = False

# Training loop with on-the-fly augmentation...
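
A concrete version of that loop might look like the sketch below, continuing the example above. The use of processor.tokenizer for the target summaries and of model(**batch).loss are assumptions about the API, not guarantees; in practice padding tokens in the labels would also be masked to -100 before computing the loss:

import torch
from torch.utils.data import DataLoader

def collate(batch):
    # Fresh random augmentation for every batch (on-the-fly).
    enc = processor(
        images=[transform(s["image"]) for s in batch],
        text=[s["text"] for s in batch],
        return_tensors="pt",
        padding=True,
    )
    # Assumption: the processor exposes a text tokenizer for the targets.
    enc["labels"] = processor.tokenizer(
        [s["summary"] for s in batch], return_tensors="pt", padding=True
    ).input_ids
    return enc

loader = DataLoader(dataset["train"], batch_size=4, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)

model.train()
for batch in loader:
    loss = model(**batch).loss  # assumption: loss is computed when labels are present
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()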

Training Recommendations

DeepSeek-OCR Fine-Tuning

# Recommended hyperparameters with augmentation
training_args = {
    "learning_rate": 2e-5,
    "batch_size": 4,
    "gradient_accumulation_steps": 4,
    "num_epochs": 3,
    "mixed_precision": "bf16",
    "freeze_encoder": True,  # IMPORTANT: Only fine-tune decoder

    # Augmentation parameters
    "rotation_degrees": 10,           # Random rotation ±10°
    "perspective_distortion": 0.1,    # Perspective transform
    "resize_range": (512, 1600),      # Random resize 512-1600px
    "brightness_factor": 0.2,         # ±20% brightness
    "contrast_factor": 0.2,           # ±20% contrast
}
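
These keys are illustrative rather than transformers' own argument names; a rough mapping onto TrainingArguments (augmentation stays in the data collator, as in the training example above; the output directory is hypothetical):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="deepsynth-xsum-ft",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    bf16=True,  # mixed precision
)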

Expected Performance

  • Baseline (text-to-text): ROUGE-1 ~40-42
  • DeepSeek-OCR (visual): ROUGE-1 ~44-47 (typical SOTA)
  • Training Time: ~6-8 hours on A100 (80GB) for full dataset
  • GPU Memory: ~40GB with batch_size=4, mixed_precision=bf16
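
To compare against these numbers, ROUGE can be computed with the evaluate library. Note that it returns F-measures in [0, 1]; multiply by 100 to match the scale above (the strings here are placeholder examples):

import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["Government unveils climate change plan."],
    references=["Government unveils plan to tackle climate change."],
)
print(100 * scores["rouge1"])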

Dataset Creation

This dataset was created using the DeepSynth pipeline:

  1. Source Loading: Original text documents from Rexhaif/xsum_reduced
  2. Text-to-Image Conversion: Documents rendered as PNG images (DejaVu Sans 12pt, Unicode support)
  3. Original Resolution Storage: Full-quality images stored once (up to 1600×2200)
  4. Incremental Upload: Batches of 5,000 samples uploaded to HuggingFace Hub
  5. Deduplication: Source tracking prevents duplicate samples

Note: Images are augmented on-the-fly during training using random transformations (rotation, perspective, resize, color jitter) for better generalization across different resolutions and conditions.

Rendering Specifications

  • Font: DejaVu Sans 12pt (full Unicode support for multilingual text)
  • Line Wrapping: 100 characters per line
  • Margin: 40px
  • Background: White (255, 255, 255)
  • Text Color: Black (0, 0, 0)
  • Format: PNG with lossless compression
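
A minimal PIL sketch approximating these specifications (the font path and the fixed line height are assumptions; the actual DeepSynth renderer may differ):

import textwrap
from PIL import Image, ImageDraw, ImageFont

def render_document(text: str, out_path: str = "doc.png") -> None:
    # Assumed font location (standard on Debian/Ubuntu systems).
    font = ImageFont.truetype(
        "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 12
    )
    lines = textwrap.wrap(text, width=100) or [""]  # 100 characters per line
    margin, line_height = 40, 16                    # 40px margin, assumed leading
    width = 2 * margin + max(int(font.getlength(l)) for l in lines)
    height = 2 * margin + line_height * len(lines)
    img = Image.new("RGB", (width, height), (255, 255, 255))  # white background
    draw = ImageDraw.Draw(img)
    for i, line in enumerate(lines):
        draw.text((margin, margin + i * line_height), line,
                  font=font, fill=(0, 0, 0))        # black text
    img.save(out_path, format="PNG")                # lossless PNG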

Citation

If you use this dataset in your research, please cite:

@misc{deepsynth-en-xsum,
    title={{DeepSynth XSum BBC News Summarization: Visual Text Encoding with Random Augmentation for Summarization}},
    author={Baconnier},
    year={2025},
    publisher={HuggingFace},
    howpublished={\url{https://huggingface.co/datasets/baconnier/deepsynth-en-xsum}}
}

Source Dataset Citation

@inproceedings{narayan2018don,
    title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
    author={Narayan, Shashi and Cohen, Shay B and Lapata, Mirella},
    booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
    year={2018}
}

License

MIT License - See source dataset for full license terms.

Note: This dataset inherits the license from the original source dataset. Please review the source license before commercial use.

Limitations and Bias

  • Extreme summarization: Single-sentence summaries may lose important details
  • UK-centric: Primarily British news and perspectives
  • Short summaries: Not suitable for multi-sentence summary training
  • Temporal bias: Articles from 2010-2017

Additional Information

Dataset Curators

Created by the DeepSynth team as part of multilingual visual summarization research.

Acknowledgments

  • DeepSeek-OCR: Visual encoder from DeepSeek AI
  • Source Dataset: Rexhaif/xsum_reduced
  • HuggingFace: Dataset hosting and infrastructure