---
library_name: pytorch
license: other
tags:
- low-level-vision
- all-in-one image-restoration
language:
- en
pipeline_tag: image-to-image
model-index:
- name: RAM / RAM++
  results:
  - task:
      type: image-to-image
      name: All-in-One Image Restoration
    dataset:
      name: placeholder
      type: image
    metrics:
    - name: PSNR
      type: psnr
      value: 0.0
---

These are the official pretrained models for the following papers.

> **Restore Anything with Masks: Leveraging Mask Image Modeling for Blind All-in-One Image Restoration**
> [Chujie Qin](https://github.com/Dragonisss), [Ruiqi Wu](https://rq-wu.github.io/), Zikun Liu, [Xin Lin](https://linxin0.github.io/), [Chunle Guo](https://scholar.google.com/citations?user=RZLYwR0AAAAJ&hl=en), Hyun Hee Park, [Chongyi Li](https://li-chongyi.github.io/)
> ( † indicates corresponding author )
> In ECCV 2024, \[[HomePage](https://rq-wu.github.io/projects/RAM/index.html)\], \[[Paper Link](https://arxiv.org/abs/2409.19403v1)\]

> **RAM++: Robust Representation Learning via Adaptive Mask for All-in-One Image Restoration**
> [Zilong Zhang*](https://github.com/Zilong-Zhang003), [Chujie Qin*](https://github.com/DragonisCV), [Chunle Guo](https://mmcheng.net/clguo/), Yong Zhang, Chao Xue, [Ming-Ming Cheng](https://mmcheng.net/cmm/), [Chongyi Li](https://li-chongyi.github.io/)
> ( * indicates equal contribution; † indicates corresponding author )
> arXiv preprint, \[[HomePage](https://zilong-zhang003.github.io/RAM2.0/)\], \[[Paper Link](https://arxiv.org/abs/2509.12039)\]

# Model description

## RAM

This method is architecture-agnostic and can be trained with any model. \
Here we provide the pre-trained and fine-tuned weights for two representative models: [PromptIR](https://github.com/va1shn9v/PromptIR) and [SwinIR](https://github.com/JingyunLiang/SwinIR).

## RAM_plus

AdaSAM is a ViT-based, pixel-level mask generator. It analyzes correlations between image tokens and applies masks to regions that are semantically and texturally rich.

RestormerWoSkip is built on [Restormer](https://github.com/swz30/Restormer); it differs by removing the long-range residual connections.

RestormerRFR regularizes training via an efficient feature-fusion strategy that leverages DINOv2's semantic consistency and degradation invariance.

Each folder contains model weights trained under configurations with different numbers of tasks.

# How to use

For full instructions and runnable scripts, see the [code repository](https://github.com/DragonisCV/RAM/).

## RAM

### Pre-training

```python
mask, mask_token = Random(img)  # random pixel-level masking
output = PromptIR(img, mask, mask_token)
```

### Fine-tuning

```python
output = PromptIR(img, mask=None, mask_token=None)
```

## RAM_plus

### Pre-training

```python
mask, mask_token = AdaSAM(img)
output = RestormerWoSkip(img, mask, mask_token)
```

### Fine-tuning

```python
dino_features = DINOv2(img)
output = RestormerRFR(img, mask=None, mask_token=None, dino_features=dino_features)
```

# Citation

If you find our repo useful for your research, please consider citing our papers:

```bibtex
@inproceedings{qin2024restore,
  title={Restore Anything with Masks: Leveraging Mask Image Modeling for Blind All-in-One Image Restoration},
  author={Qin, Chu-Jie and Wu, Rui-Qi and Liu, Zikun and Lin, Xin and Guo, Chun-Le and Park, Hyun Hee and Li, Chongyi},
  booktitle={European Conference on Computer Vision},
  pages={364--380},
  year={2024},
  organization={Springer}
}

@misc{zhang2025ramrobustrepresentationlearning,
  title={RAM++: Robust Representation Learning via Adaptive Mask for All-in-One Image Restoration},
  author={Zilong Zhang and Chujie Qin and Chunle Guo and Yong Zhang and Chao Xue and Ming-Ming Cheng and Chongyi Li},
  year={2025},
  eprint={2509.12039},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.12039},
}
```
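As a minimal illustration of the random pixel-level masking step used in RAM pre-training, the sketch below masks a random fraction of pixels and fills them with a mask token. This is an assumption-laden toy version, not the repository's implementation: the function name `random_pixel_mask` is hypothetical, and the mask token here is a fixed zero tensor where the actual model would use a learnable parameter.

```python
import torch

def random_pixel_mask(img: torch.Tensor, ratio: float = 0.5):
    """Toy MIM-style pixel masking (hypothetical helper, not the repo's code).

    img: (B, C, H, W). Returns a binary mask (B, 1, H, W) where 1 = masked,
    a mask token, and the masked image.
    """
    b, c, h, w = img.shape
    mask = (torch.rand(b, 1, h, w) < ratio).float()
    mask_token = torch.zeros(1, c, 1, 1)  # learnable in the real model
    masked_img = img * (1.0 - mask) + mask_token * mask
    return mask, mask_token, masked_img

img = torch.randn(2, 3, 64, 64)
mask, mask_token, masked = random_pixel_mask(img, ratio=0.5)
```

The masked image would then be passed to the restoration backbone (e.g. `PromptIR`) together with `mask` and `mask_token`, as in the pre-training snippets above.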