---
tags:
- ocr
- document-processing
- deepseek
- deepseek-ocr
- markdown
- uv-script
- generated
---

# Document OCR using DeepSeek-OCR

This dataset contains markdown-formatted OCR results for the images in [davanstrien/ufo-ColPali](https://huggingface.co/datasets/davanstrien/ufo-ColPali), produced with DeepSeek-OCR.

## Processing Details

- **Source Dataset**: [davanstrien/ufo-ColPali](https://huggingface.co/datasets/davanstrien/ufo-ColPali)
- **Model**: [deepseek-ai/DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR)
- **Number of Samples**: 10
- **Processing Time**: 1.6 min
- **Processing Date**: 2025-10-22 16:47 UTC

### Configuration

- **Image Column**: `image`
- **Output Column**: `markdown`
- **Dataset Split**: `train`
- **Batch Size**: 8
- **Resolution Mode**: tiny
- **Base Size**: 512
- **Image Size**: 512
- **Crop Mode**: False
- **Max Model Length**: 8,192 tokens
- **Max Output Tokens**: 8,192
- **GPU Memory Utilization**: 80.0%

## Model Information

DeepSeek-OCR is a state-of-the-art document OCR model that excels at:

- 📐 **LaTeX equations** - Mathematical formulas preserved in LaTeX format
- 📊 **Tables** - Extracted and formatted as HTML/markdown
- 📝 **Document structure** - Headers, lists, and formatting maintained
- 🖼️ **Image grounding** - Spatial layout and bounding-box information
- 🔍 **Complex layouts** - Multi-column and hierarchical structures
- 🌍 **Multilingual** - Supports multiple languages

### Resolution Modes

- **Tiny** (512×512): Fast processing, 64 vision tokens
- **Small** (640×640): Balanced speed/quality, 100 vision tokens
- **Base** (1024×1024): High quality, 256 vision tokens
- **Large** (1280×1280): Maximum quality, 400 vision tokens
- **Gundam** (dynamic): Adaptive multi-tile processing for large documents

## Dataset Structure

The dataset contains all original columns plus:

- `markdown`: The extracted text in markdown format, with document structure preserved
- `inference_info`: A JSON list tracking all OCR models applied to this dataset

## Usage

```python
from datasets import load_dataset
import json

# Load the dataset
dataset = load_dataset("{{output_dataset_id}}", split="train")

# Access the markdown text
for example in dataset:
    print(example["markdown"])
    break

# View all OCR models applied to this dataset
inference_info = json.loads(dataset[0]["inference_info"])
for info in inference_info:
    print(f"Column: {info['column_name']} - Model: {info['model_id']}")
```

## Reproduction

This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DeepSeek-OCR vLLM script:

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr-vllm.py \
    davanstrien/ufo-ColPali \
    --resolution-mode tiny \
    --image-column image
```

## Performance

- **Processing Speed**: ~0.1 images/second
- **Processing Method**: Batch processing with vLLM (2-3x speedup over sequential processing)

Generated with 🤖 [UV Scripts](https://huggingface.co/uv-scripts)
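
## Example: Filtering Empty OCR Results

A minimal sketch of a common follow-up step: dropping rows where the OCR pass returned no text. It relies only on the standard `datasets` filtering API; as in the Usage example above, `{{output_dataset_id}}` is a placeholder for the published dataset ID.

```python
from datasets import load_dataset

# Load the OCR results (replace the placeholder with the actual dataset ID)
dataset = load_dataset("{{output_dataset_id}}", split="train")

# Keep only rows where the `markdown` column is non-empty after stripping whitespace
non_empty = dataset.filter(lambda row: bool(row["markdown"] and row["markdown"].strip()))

print(f"{len(non_empty)}/{len(dataset)} samples have non-empty markdown")
```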