MedicalNarratives

Dataset Summary

MedicalNarratives is a large-scale, multimodal medical vision–language corpus built from pedagogical videos and scientific articles. It is designed to unify semantic (classification, retrieval, captioning) and dense (detection, segmentation, grounding) tasks across clinical imaging.

Key properties:

  • 4.7M image–text pairs
  • ≈1M samples with cursor-trace or bounding-box spatial grounding
  • 12 imaging domains, including X-ray, CT, MRI, ultrasound, histopathology, and dermatology
  • UMLS grounding for medical and ROI text
  • Sources:
    • 74k narrated pedagogical YouTube videos (~4,526 hours)
    • ≈273k PubMed Central OA articles with ≈1.03M figure–caption pairs

Project page: https://medical-narratives.github.io
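
A minimal loading sketch with the Hugging Face datasets library. The repository ID below is a placeholder and the split name is an assumption; if the repository is gated, authenticate first with huggingface-cli login.

from datasets import load_dataset

# Placeholder repository ID and assumed split name; adjust to the actual repo.
# If the dataset is gated, run `huggingface-cli login` first.
ds = load_dataset("USER/MedicalNarratives", split="train")

sample = ds[0]
print(sample["uid"], sample["text"][:1])  # unique ID and first denoised caption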


Supported Tasks

  • Vision–language pretraining
  • Zero-shot & supervised classification (see the sketch after this list)
  • Image–text retrieval
  • Grounded captioning / visual grounding
  • Open-vocabulary detection and segmentation
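
As one example of the semantic tasks above, here is a zero-shot classification sketch using a generic CLIP checkpoint from transformers. The checkpoint is not trained on MedicalNarratives and the image path is a placeholder; this only illustrates the task interface.

from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Generic CLIP checkpoint (NOT trained on MedicalNarratives); used only to
# show the zero-shot classification interface.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("sample.png")  # placeholder path to a dataset image
labels = ["an X-ray image", "a CT scan", "an MRI scan", "an ultrasound image"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_labels)
print(labels[logits.softmax(dim=-1).argmax().item()])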

Dataset Structure

General Fields

  • uid: str — Unique ID {VIDEO_ID}_{IDX} or {PMCID}
  • index: int — Global dataset index
  • image: str — Image filepath
  • text: List[str] — Denoised captions
  • clip_name: str — Filename of the video clip corresponding to the image
  • med_num: int — Number of captions
  • video: bool — Video-derived sample
  • article: bool — Article-derived sample
  • sample_index: int — Index within source video
  • domains: str — List of imaging domains, serialized as a string (see the parsing sketch after this list)
  • sample_uids: Dict[str, int] — Mapping of sibling samples in the same video
  • sample_count: int — Total samples from this video
  • multidomain: bool — True if >1 modality
  • multisample: bool — True if sample_count >1
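
Because domains is stored as a string rather than a native list, a small parsing sketch may help. The serialization format is an assumption here: the code tries a Python-style list literal and falls back to a comma split.

import ast

def parse_domains(domains_str):
    # Assumption: the list is serialized as a Python-style literal,
    # e.g. "['xray', 'ct']"; fall back to a comma split otherwise.
    try:
        value = ast.literal_eval(domains_str)
        return list(value) if isinstance(value, (list, tuple)) else [str(value)]
    except (ValueError, SyntaxError):
        return [d.strip() for d in domains_str.split(",") if d.strip()]

print(parse_domains("['xray', 'mri']"))  # -> ['xray', 'mri']

If the assumption holds, len(parse_domains(sample["domains"])) > 1 should mirror the multidomain flag.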

Video-Specific Fields

  • video_id: str — YouTube ID
  • video_link: str — YouTube URL
  • video_channel: str — Channel name
  • duration: float — Seconds
  • fps: float — Frames per second
  • spw: float — Seconds per word (ASR alignment)
  • pair_chunk_time: float — Target chunk duration
  • chunk_times: List[List[float]] — Stable-scene intervals
  • roi_chunk_times: List[List[float]] — ROI-activity intervals
  • width: int, height: int — Frame dimensions
  • chunk_id: int — Chunk index
  • chunk_start_time: float, chunk_end_time: float — Timestamps
  • chunk_medical_text: List[str] — Chunk-level medical text
  • chunk_roi_text: List[str] — ROI text
  • chunk_med_umls: List — UMLS for medical text
  • chunk_roi_umls: List — UMLS for ROI text
  • noisy_timed_captions: List[Dict] — Raw ASR transcript
  • asr_correction_dict: Dict[str, str] — ASR corrections
  • umls: List[List[Dict]] — UMLS for each caption in text
  • magnification — Video magnification (usually null)
  • img_time — Processing field (ignore)
  • roi — ROI spatial annotations (boxes/masks)
  • traces — Cursor traces as (x, y, t) points (see the bounding-box sketch after this list)
  • subdomain: List[str] — Fine-grained subdomains
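
Since traces stores cursor paths as (x, y, t) points, here is a sketch of deriving a rough axis-aligned box from one trace. The exact nesting of the traces field is an assumption; adapt the indexing to the real structure.

def trace_to_bbox(trace):
    # Assumption: a trace is a sequence of (x, y, t) points in pixel
    # coordinates. Returns (x_min, y_min, x_max, y_max) enclosing the path.
    xs = [p[0] for p in trace]
    ys = [p[1] for p in trace]
    return (min(xs), min(ys), max(xs), max(ys))

trace = [(120, 80, 0.0), (150, 95, 0.4), (140, 130, 0.9)]  # toy example
print(trace_to_bbox(trace))  # -> (120, 80, 150, 130)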

Article-Specific Keys

  • PMCID: str — PubMed Central article ID
  • pubmed_link: str — Article URL (https://www.ncbi.nlm.nih.gov/pmc/articles/{PMCID}/; see the sketch after this list)
  • img_xref_id: str — Figure or subfigure cross-reference ID from the article
  • pubmed_caption: str — Cleaned full figure caption
  • pubmed_caption_raw: str — Raw figure caption before processing
  • img_inline_sentences: List[str] — Article sentences referencing the figure
  • subcaptions: List[Dict] — Subfigure caption entries (with image filenames normalized)
  • subfigures: Dict[str, Any] — Mapping from subfigure image filename → metadata
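
Since pubmed_link follows a fixed pattern, the URL can be rebuilt directly from the PMCID; a one-function sketch, with an illustrative PMCID:

def pmc_url(pmcid: str) -> str:
    # Build the PubMed Central URL from a PMCID, per the pubmed_link
    # pattern listed above.
    return f"https://www.ncbi.nlm.nih.gov/pmc/articles/{pmcid}/"

print(pmc_url("PMC1234567"))  # illustrative PMCID
# -> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1234567/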

Data Properties

  • Mean image resolution: ~1521 × 903 px
  • Cursor-trace subset: >100M points, ~546k bounding boxes
  • Mean caption length: 22.4 words, 1.9 sentences
  • ~4M UMLS mentions, ~300k unique concepts

Citation

If you use MedicalNarratives, please cite:

@misc{ikezogwo2025medicalnarrativesconnectingmedicalvision,
  title        = {MedicalNarratives: Connecting Medical Vision and Language with Localized Narratives},
  author       = {Ikezogwo, Wisdom O. and Zhang, Kevin and Seyfioglu, Mehmet Saygin and Ghezloo, Fatemeh and Shapiro, Linda and Krishna, Ranjay},
  year         = {2025},
  eprint       = {2501.04184},
  archivePrefix= {arXiv},
  primaryClass = {cs.CV},
  url          = {https://arxiv.org/abs/2501.04184}
}