MedicalNarratives
Dataset Summary
MedicalNarratives is a large-scale, multimodal medical vision–language corpus built from pedagogical videos and scientific articles. It is designed to unify semantic (classification, retrieval, captioning) and dense (detection, segmentation, grounding) tasks across clinical imaging.
Key properties:
- 4.7M image–text pairs
- ≈1M samples with cursor-trace or bounding-box spatial grounding
- 12 imaging domains, including X-ray, CT, MRI, ultrasound, histopathology, and dermatology
- UMLS grounding for medical and ROI text
- Sources:
  - 74k narrated YouTube pedagogical videos (~4,526 hours)
  - ≈273k open-access PubMed Central articles with ≈1.03M figure–caption pairs
Project page: https://medical-narratives.github.io
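A minimal loading sketch with the Hugging Face datasets library. The repository ID below is an assumption; replace it with the actual MedicalNarratives repo name:

```python
from datasets import load_dataset

# Hypothetical repo ID; substitute the real MedicalNarratives repository name.
ds = load_dataset("wisdomik/MedicalNarratives", split="train")

sample = ds[0]
print(sample["uid"], sample["domains"], sample["text"][:1])
```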
Supported Tasks
- Vision–language pretraining
- Zero-shot & supervised classification
- Image–text retrieval
- Grounded captioning / visual grounding
- Open-vocabulary detection and segmentation
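For illustration of the retrieval task, the snippet below scores a single image against a few candidate captions with a generic CLIP checkpoint from transformers. This is not a MedicalNarratives-pretrained model, and the image path is a placeholder:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example_image.png")  # placeholder path
texts = ["an axial CT slice", "a frontal chest X-ray", "a dermoscopy image"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# Probability of each caption matching the image.
probs = out.logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```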
Dataset Structure
General Fields
- uid (str): Unique ID, {VIDEO_ID}_{IDX} or {PMCID}
- index (int): Global dataset index
- image (str): Image filepath
- text (List[str]): Denoised captions
- clip_name (str): Filename of the video clip corresponding to the image
- med_num (int): Number of captions
- video (bool): Video-derived sample
- article (bool): Article-derived sample
- sample_index (int): Index within the source video
- domains (str): Imaging domain list, stored as a string
- sample_uids (Dict[str, int]): Mapping of sibling samples from the same video
- sample_count (int): Total samples from this video
- multidomain (bool): True if more than one modality
- multisample (bool): True if sample_count > 1
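A small helper, assuming the general fields above, that summarizes one record. Since domains is documented as a string-encoded list, it is parsed here with ast.literal_eval:

```python
import ast

def describe_sample(sample: dict) -> str:
    """Summarize a MedicalNarratives record using its general fields."""
    domains = sample["domains"]
    if isinstance(domains, str):
        try:
            domains = ast.literal_eval(domains)  # "['ct', 'mri']" -> ['ct', 'mri']
        except (ValueError, SyntaxError):
            domains = [domains]
    source = "video" if sample.get("video") else "article"
    return (
        f"{sample['uid']} ({source}): {len(sample['text'])} caption(s), "
        f"domains={domains}, multidomain={sample.get('multidomain', False)}"
    )
```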
Video-Specific Fields
- video_id (str): YouTube ID
- video_link (str): YouTube URL
- video_channel (str): Channel name
- duration (float): Video duration in seconds
- fps (float): Frames per second
- spw (float): Seconds per word (ASR alignment)
- pair_chunk_time (float): Target chunk duration
- chunk_times (List[List[float]]): Stable-scene intervals
- roi_chunk_times (List[List[float]]): ROI-activity intervals
- width (int), height (int): Frame dimensions
- chunk_id (int): Chunk index
- chunk_start_time (float), chunk_end_time (float): Chunk timestamps
- chunk_medical_text (List[str]): Chunk-level medical text
- chunk_roi_text (List[str]): ROI text
- chunk_med_umls (List): UMLS annotations for the medical text
- chunk_roi_umls (List): UMLS annotations for the ROI text
- noisy_timed_captions (List[Dict]): Raw ASR transcript
- asr_correction_dict (Dict[str, str]): ASR corrections
- umls (List[List[Dict]]): UMLS annotations for each caption in text
- magnification: Video magnification (usually null)
- img_time: Processing field (ignore)
- roi: ROI spatial annotations (boxes/masks)
- traces: Cursor traces as (x, y, t) points
- subdomain (List[str]): Fine-grained subdomains
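A sketch of how the spatial and temporal video fields above can be used: converting one cursor trace of (x, y, t) points into an enclosing bounding box, and computing a chunk's duration. Whether trace coordinates are in pixels or normalized to width/height is an assumption to verify against the data:

```python
from typing import List, Tuple

def trace_to_bbox(trace: List[Tuple[float, float, float]]) -> Tuple[float, float, float, float]:
    """Enclosing box (x_min, y_min, x_max, y_max) for one (x, y, t) cursor trace.
    Coordinate units (pixels vs. normalized) are an assumption to check."""
    xs = [p[0] for p in trace]
    ys = [p[1] for p in trace]
    return min(xs), min(ys), max(xs), max(ys)

def chunk_duration(sample: dict) -> float:
    """Length in seconds of the narrated chunk this frame came from."""
    return sample["chunk_end_time"] - sample["chunk_start_time"]
```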
Article-Specific Keys
- PMCID (str): PubMed Central article ID
- pubmed_link (str): Article URL (https://www.ncbi.nlm.nih.gov/pmc/articles/{PMCID}/)
- img_xref_id (str): Figure or subfigure cross-reference ID from the article
- pubmed_caption (str): Cleaned full figure caption
- pubmed_caption_raw (str): Raw figure caption before processing
- img_inline_sentences (List[str]): Article sentences referencing the figure
- subcaptions (List[Dict]): Subfigure caption entries (with image filenames normalized)
- subfigures (Dict[str, Any]): Mapping from subfigure image filename → metadata
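A hedged example of working with the article keys above: rebuilding the PMC URL from PMCID when pubmed_link is missing, and iterating subcaptions. The key names inside each subcaption entry ('image', 'caption') are assumptions, not documented here:

```python
def pmc_url(sample: dict) -> str:
    """Prefer the stored link; otherwise rebuild it from PMCID using the
    URL pattern documented above."""
    return sample.get("pubmed_link") or (
        f"https://www.ncbi.nlm.nih.gov/pmc/articles/{sample['PMCID']}/"
    )

def iter_subcaptions(sample: dict):
    """Yield (subfigure filename, caption text) pairs.
    The inner keys 'image' and 'caption' are assumed, not documented."""
    for entry in sample.get("subcaptions") or []:
        yield entry.get("image"), entry.get("caption")
```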
Data Properties
- Mean image resolution: ~1521 × 903 px
- Cursor-trace subset: >100M points, ~546k bounding boxes
- Mean caption length: 22.4 words (1.9 sentences)
- ~4M UMLS mentions, ~300k unique concepts
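To reproduce rough caption statistics on a subset, a sketch like the following works on any iterable of records with a text field; counting sentences by periods is only an approximation:

```python
def caption_stats(samples) -> dict:
    """Approximate mean words and sentences per caption over an iterable of records."""
    words, sents, n = 0, 0, 0
    for s in samples:
        for cap in s["text"]:
            words += len(cap.split())
            sents += max(1, cap.count("."))  # crude sentence count
            n += 1
    return {"mean_words": words / max(n, 1), "mean_sentences": sents / max(n, 1)}
```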
Citation
If you use MedicalNarratives, please cite:
@misc{ikezogwo2025medicalnarrativesconnectingmedicalvision,
title = {MedicalNarratives: Connecting Medical Vision and Language with Localized Narratives},
author = {Ikezogwo, Wisdom O. and Zhang, Kevin and Seyfioglu, Mehmet Saygin and Ghezloo, Fatemeh and Shapiro, Linda and Krishna, Ranjay},
year = {2025},
eprint = {2501.04184},
archivePrefix = {arXiv},
primaryClass = {cs.CV},
url = {https://arxiv.org/abs/2501.04184}
}