---
pipeline_tag: image-segmentation
---
This is a segmentation model trained for pancreatic lesion segmentation, presented in the paper [Scaling Artificial Intelligence for Multi-Tumor Early Detection with More Reports, Fewer Masks](https://huggingface.co/papers/2510.14803). It was trained with the Report Supervision ([R-Super](https://github.com/MrGiovanni/R-Super), MICCAI 2025, Best Paper Award Runner-up) training methodology, which **learns tumor segmentation directly from radiology reports** through new loss functions.
This checkpoint was trained with public data: **1.8K pancreatic lesion reports** from the [Merlin](https://stanfordaimi.azurewebsites.net/datasets?domain=BODY) dataset, plus **0.9K pancreatic lesion masks** from [PanTS](https://github.com/MrGiovanni/PanTS).
The model architecture is MedFormer; the training methodology is Report Supervision (R-Super).
**Training and inference code: https://github.com/MrGiovanni/R-Super**
# Label order
```yaml
- adrenal_gland_left
- adrenal_gland_right
- aorta
- bladder
- colon
- common_bile_duct
- duodenum
- femur_left
- femur_right
- gall_bladder
- kidney_left
- kidney_right
- liver
- lung_left
- lung_right
- pancreas
- pancreas_body
- pancreas_head
- pancreas_tail
- pancreatic_lesion
- postcava
- prostate
- spleen
- stomach
- superior_mesenteric_artery
- veins
```
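The same class list ships as `labels_pants.yaml` alongside the checkpoint. If you need the class indices programmatically, here is a minimal sketch (an illustration, not repo code; it assumes mask value `i+1` corresponds to the i-th entry above, so verify the 0- vs 1-based convention against the inference code):

```python
import yaml

# Load the released class list (same order as shown above).
with open("R-SuperPanTSMerlin/labels_pants.yaml") as f:
    labels = yaml.safe_load(f)  # assumed to be a plain YAML list of class names

# Assumption: value i+1 in a multi-class mask maps to labels[i].
index_to_name = {i + 1: name for i, name in enumerate(labels)}
lesion_index = labels.index("pancreatic_lesion") + 1
print(f"pancreatic_lesion -> class index {lesion_index}")
```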
---
# Papers
Scaling Artificial Intelligence for Multi-Tumor Early Detection with More Reports, Fewer Masks
[Pedro R. A. S. Bassi](https://scholar.google.com/citations?user=NftgL6gAAAAJ&hl=en), [Wenxuan Li](https://scholar.google.com/citations?hl=en&user=tpNZM2YAAAAJ), [Jieneng Chen](https://scholar.google.com/citations?user=yLYj88sAAAAJ&hl=zh-CN), Zheren Zhu, Tianyu Lin, [Sergio Decherchi](https://scholar.google.com/citations?user=T09qQ1IAAAAJ&hl=it), [Andrea Cavalli](https://scholar.google.com/citations?user=4xTOvaMAAAAJ&hl=en), [Kang Wang](https://radiology.ucsf.edu/people/kang-wang), [Yang Yang](https://scholar.google.com/citations?hl=en&user=6XsJUBIAAAAJ), [Alan Yuille](https://www.cs.jhu.edu/~ayuille/), [Zongwei Zhou](https://www.zongweiz.com/)*
*Johns Hopkins University*
Learning Segmentation from Radiology Reports
[Pedro R. A. S. Bassi](https://scholar.google.com/citations?user=NftgL6gAAAAJ&hl=en), [Wenxuan Li](https://scholar.google.com/citations?hl=en&user=tpNZM2YAAAAJ), [Jieneng Chen](https://scholar.google.com/citations?user=yLYj88sAAAAJ&hl=zh-CN), Zheren Zhu, Tianyu Lin, [Sergio Decherchi](https://scholar.google.com/citations?user=T09qQ1IAAAAJ&hl=it), [Andrea Cavalli](https://scholar.google.com/citations?user=4xTOvaMAAAAJ&hl=en), [Kang Wang](https://radiology.ucsf.edu/people/kang-wang), [Yang Yang](https://scholar.google.com/citations?hl=en&user=6XsJUBIAAAAJ), [Alan Yuille](https://www.cs.jhu.edu/~ayuille/), [Zongwei Zhou](https://www.zongweiz.com/)*
*Johns Hopkins University*
MICCAI 2025
Best Paper Award Runner-up (top 2 of 1,027 papers)

PanTS: The Pancreatic Tumor Segmentation Dataset
[Wenxuan Li](https://scholar.google.com/citations?hl=en&user=tpNZM2YAAAAJ), Xinze Zhou, Qi Chen, Tianyu Lin, Pedro R.A.S. Bassi, ..., [Alan Yuille](https://www.cs.jhu.edu/~ayuille/), [Zongwei Zhou](https://www.zongweiz.com/)★
*Johns Hopkins University*
Merlin: A Vision Language Foundation Model for 3D Computed Tomography
Louis Blankemeier, Joseph P. Cohen, Ashwin Kumar, ..., Akshay S. Chaudhari
*Stanford*
# Inference
**0- Download and installation.**
[Optional] Install Anaconda on Linux
```bash
wget https://repo.anaconda.com/archive/Anaconda3-2024.06-1-Linux-x86_64.sh
bash Anaconda3-2024.06-1-Linux-x86_64.sh -b -p ./anaconda3
./anaconda3/bin/conda init
source ~/.bashrc
```
```bash
git clone https://github.com/MrGiovanni/R-Super
cd R-Super/rsuper_train
conda create -n rsuper python=3.10
conda activate rsuper
pip install -r requirements.txt
pip install -U "huggingface_hub[cli]"
hf download AbdomenAtlas/R-SuperPanTSMerlin --local-dir ./R-SuperPanTSMerlin
```
**1- Pre-processing.** Prepare your dataset in the format below. You can use symlinks instead of copying your data.
Dataset format.
```
/path/to/dataset/
├── BDMAP_0000001
| └── ct.nii.gz
├── BDMAP_0000002
| └── ct.nii.gz
...
```
**2- Inference.** The code below runs inference, generating binary segmentation masks. To save probabilities, add the argument --save_probabilities or --save_probabilities_lesions (which saves probabilities only for lesions, not for organs). The optional argument --organ_mask_on_lesion uses organ segmentations (produced by the R-Super model itself, not ground truth) to remove tumor predictions that fall outside the corresponding organ.
```bash
python predict_abdomenatlas.py --load R-SuperPanTSMerlin/merlin_pancreas_pants_release/fold_0_latest.pth --img_path /path/to/test/dataset/ --class_list R-SuperPanTSMerlin/labels_pants.yaml --save_path /path/to/inference/output/ --organ_mask_on_lesion
```
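Once inference finishes, you can sanity-check a prediction, for instance by measuring the lesion volume. Below is a rough sketch using nibabel; the file layout under --save_path depends on predict_abdomenatlas.py, so the mask path here is hypothetical and must be adapted:

```python
import nibabel as nib
import numpy as np

# Hypothetical output path; check how predict_abdomenatlas.py names its files.
mask_path = "/path/to/inference/output/BDMAP_0000001/pancreatic_lesion.nii.gz"

img = nib.load(mask_path)
mask = np.asarray(img.dataobj) > 0          # binary lesion mask

voxel_mm3 = float(np.prod(img.header.get_zooms()[:3]))
print(f"lesion voxels: {int(mask.sum())}, "
      f"volume: {mask.sum() * voxel_mm3 / 1000.0:.1f} mL")
```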
**Argument details**
- `--load`: path to the model checkpoint (fold_0_latest.pth)
- `--img_path`: path to the dataset
- `--class_list`: a YAML file with the class names of the model
- `--save_path`: output path, where masks will be saved
- `--ids`: optional. By default, the code predicts on all cases in `--img_path`; if you pass `--ids`, it tests only on the CT scans listed there. You can use this to separate a test set: `--ids /path/to/test/set/ids.csv`. The CSV file must have a 'BDMAP ID' column with the IDs of the test cases (see the sketch below).
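For reference, a minimal way to produce that CSV (the case IDs are placeholders):

```python
import csv

test_ids = ["BDMAP_0000001", "BDMAP_0000002"]  # placeholder held-out cases

with open("ids.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["BDMAP ID"])              # exact column name expected
    writer.writerows([[case_id] for case_id in test_ids])
```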
For more details, see https://github.com/MrGiovanni/R-Super/tree/main/rsuper_train#test
# Citations
If you use the code, data, or methods in this repository, please cite:
```
@inproceedings{bassi2025learning,
  title={Learning Segmentation from Radiology Reports},
  author={Bassi, Pedro RAS and Li, Wenxuan and Chen, Jieneng and Zhu, Zheren and Lin, Tianyu and Decherchi, Sergio and Cavalli, Andrea and Wang, Kang and Yang, Yang and Yuille, Alan L and others},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={305--315},
  year={2025},
  organization={Springer}
}
@misc{bassi2025scaling,
  title={Scaling Artificial Intelligence for Multi-Tumor Early Detection with More Reports, Fewer Masks},
  author={Pedro R. A. S. Bassi and Xinze Zhou and Wenxuan Li and Szymon Płotka and Jieneng Chen and Qi Chen and Zheren Zhu and Jakub Prządo and Ibrahim E. Hamacı and Sezgin Er and Yuhan Wang and Ashwin Kumar and Bjoern Menze and Jarosław B. Ćwikła and Yuyin Zhou and Akshay S. Chaudhari and Curtis P. Langlotz and Sergio Decherchi and Andrea Cavalli and Kang Wang and Yang Yang and Alan L. Yuille and Zongwei Zhou},
  year={2025},
  eprint={2510.14803},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.14803},
}
@article{bassi2025radgpt,
  title={{RadGPT}: Constructing {3D} image-text tumor datasets},
  author={Bassi, Pedro RAS and Yavuz, Mehmet Can and Wang, Kang and Chen, Xiaoxi and Li, Wenxuan and Decherchi, Sergio and Cavalli, Andrea and Yang, Yang and Yuille, Alan and Zhou, Zongwei},
  journal={arXiv preprint arXiv:2501.04678},
  year={2025}
}
```
## Acknowledgement
This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research, the Patrick J. McGovern Foundation Award, and the National Institutes of Health (NIH) under Award Number R01EB037669. We would like to thank the Johns Hopkins Research IT team at [IT@JH](https://researchit.jhu.edu/) for their support and the infrastructure resources where some of these analyses were conducted, especially [DISCOVERY HPC](https://researchit.jhu.edu/research-hpc/). Paper content is covered by patents pending.