---
library_name: pytorch
license: other
tags:
- low-level-vision
- all-in-one-image-restoration
language:
- en
pipeline_tag: image-to-image
model-index:
- name: RAM / RAM++
results:
- task:
type: image-to-image
name: All-in-One Image Restoration
dataset:
name: placeholder
type: image
metrics:
- name: PSNR
type: psnr
value: 0.0
---
These are the official pretrained models for the following papers.
>**Restore Anything with Masks: Leveraging Mask Image Modeling for Blind All-in-One Image Restoration**<br> [Chujie Qin](https://github.com/Dragonisss), [Ruiqi Wu](https://rq-wu.github.io/), Zikun Liu, [Xin Lin](https://linxin0.github.io/), [Chunle Guo](https://scholar.google.com/citations?user=RZLYwR0AAAAJ&hl=en), Hyun Hee Park, [Chongyi Li<sup>†</sup>](https://li-chongyi.github.io/)<br/>
> (<sup>†</sup> indicates corresponding author)<br/>
> In ECCV 2024, \[[HomePage](https://rq-wu.github.io/projects/RAM/index.html)\], \[[Paper Link](https://arxiv.org/abs/2409.19403v1)\]

> **RAM++: <u>R</u>obust Representation Learning via <u>A</u>daptive <u>M</u>ask for All-in-One Image Restoration**<br>
> [Zilong Zhang<sup>*</sup>](https://github.com/Zilong-Zhang003), [Chujie Qin<sup>*</sup>](https://github.com/DragonisCV), [Chunle Guo](https://mmcheng.net/clguo/), Yong Zhang, Chao Xue, [Ming-Ming Cheng](https://mmcheng.net/cmm/), [Chongyi Li<sup>†</sup>](https://li-chongyi.github.io/)<br/>
> (<sup>*</sup> indicates equal contribution; <sup>†</sup> indicates corresponding author)<br/>
> arXiv preprint, \[[HomePage](https://zilong-zhang003.github.io/RAM2.0/)\], \[[Paper Link](https://arxiv.org/abs/2509.12039)\]
# Model description
## RAM
RAM is architecture-agnostic and can be trained with any restoration backbone. Here we provide the pre-trained and fine-tuned weights for two representative models: <strong>[PromptIR](https://github.com/va1shn9v/PromptIR)</strong> and <strong>[SwinIR](https://github.com/JingyunLiang/SwinIR)</strong>.
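
To sanity-check a download, the weights load with standard PyTorch calls. The snippet below is a minimal sketch: the checkpoint filename and key layout are assumptions, and the actual files and loading code are in the [code repository](https://github.com/DragonisCV/RAM/).

```python
import torch

# Hypothetical filename; the released checkpoints may be named differently.
ckpt = torch.load('ram_promptir_finetuned.pth', map_location='cpu')

model = PromptIR()                # architecture as defined in the PromptIR repository
state = ckpt.get('params', ckpt)  # some checkpoints nest weights under a key
model.load_state_dict(state)
model.eval()
```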
## RAM++
<strong>AdaSAM</strong> is a ViT-based, pixel-level mask generator. It analyzes correlations between image tokens and applies masks to regions that are semantically and texturally rich.
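
For intuition, the sketch below shows one way an attention-driven, pixel-level mask could be built. It is illustrative only, not the official AdaSAM code; the attention source, mask ratio, and square token grid are assumptions.

```python
import torch
import torch.nn.functional as F

def adaptive_pixel_mask(attn, mask_ratio=0.5, patch=16):
    """attn: (B, N, N) token-to-token attention from a ViT block."""
    scores = attn.mean(dim=1)                 # (B, N) attention each token receives
    n_mask = int(scores.shape[1] * mask_ratio)
    idx = scores.topk(n_mask, dim=1).indices  # mask the most-attended tokens
    token_mask = torch.zeros_like(scores)
    token_mask.scatter_(1, idx, 1.0)          # 1 = masked, 0 = visible
    side = int(scores.shape[1] ** 0.5)        # assumes a square token grid
    pixel_mask = token_mask.view(-1, 1, side, side)
    return F.interpolate(pixel_mask, scale_factor=patch, mode='nearest')
```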
<strong>RestormerWoSkip</strong> is built on <strong>[Restormer](https://github.com/swz30/Restormer)</strong>; it differs by removing the long-range residual connections.

<strong>RestormerRFR</strong> regularizes via an efficient feature-fusion strategy that leverages DINOv2's semantic consistency and degradation invariance.
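
One plausible shape for such a fusion module is sketched below; the class name, channel dimensions, and additive fusion are illustrative assumptions, not the released RestormerRFR code.

```python
import torch.nn as nn
import torch.nn.functional as F

class DinoFusion(nn.Module):
    """Illustrative fusion of DINOv2 patch tokens into restoration features."""
    def __init__(self, dino_dim=768, feat_dim=48):
        super().__init__()
        self.proj = nn.Conv2d(dino_dim, feat_dim, kernel_size=1)  # channel projection

    def forward(self, feat, dino_tokens, grid_hw):
        # dino_tokens: (B, N, C) patch tokens -> (B, C, h, w) spatial map
        b, n, c = dino_tokens.shape
        dino = dino_tokens.transpose(1, 2).reshape(b, c, *grid_hw)
        dino = F.interpolate(self.proj(dino), size=feat.shape[-2:],
                             mode='bilinear', align_corners=False)
        return feat + dino  # simple additive fusion as a stand-in
```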
<strong>Different folders</strong> contain model weights trained under configurations with different numbers of tasks.
# How to use
For full instructions and runnable scripts, see the [code repository](https://github.com/DragonisCV/RAM/).
## RAM
### Pre-training:
```python
mask, mask_token = Random(img)            # random pixel-level mask + learnable mask token
output = PromptIR(img, mask, mask_token)  # reconstruct from the masked input
```
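
How such a step might be supervised, as a hedged sketch: the L1 loss restricted to masked pixels and the `target` variable are assumptions, and the actual objective is defined in the repository.

```python
import torch.nn.functional as F

mask, mask_token = Random(img)                  # as above: binary pixel mask + mask token
output = PromptIR(img, mask, mask_token)
loss = F.l1_loss(output * mask, target * mask)  # supervise only the masked pixels
loss.backward()
```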
### Fine-tuning:
```python
output = PromptIR(img, mask=None, mask_token=None)  # no masking at fine-tuning time
```
## RAM++
### Pre-training:
```python
mask, mask_token = AdaSAM(img)                   # adaptive pixel-level mask
output = RestormerWoSkip(img, mask, mask_token)
```
### Fine-tuning:
```python
dino_features = DINOv2(img)  # degradation-invariant semantic features
output = RestormerRFR(img, mask=None, mask_token=None, dino_features=dino_features)
```
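
For the DINOv2 features, the public torch.hub entry points can be used; `dinov2_vitb14` and the patch-token key below are illustrative choices, and the exact feature interface expected by RestormerRFR is defined in the repository.

```python
import torch

dinov2 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14').eval()
with torch.no_grad():
    feats = dinov2.forward_features(img)     # dict of normalized token features
dino_features = feats['x_norm_patchtokens']  # (B, N, C) patch tokens

output = RestormerRFR(img, mask=None, mask_token=None, dino_features=dino_features)
```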
# Citation
If you find our repo useful for your research, please consider citing our papers:
```bibtex
@inproceedings{qin2024restore,
  title={Restore Anything with Masks: Leveraging Mask Image Modeling for Blind All-in-One Image Restoration},
  author={Qin, Chu-Jie and Wu, Rui-Qi and Liu, Zikun and Lin, Xin and Guo, Chun-Le and Park, Hyun Hee and Li, Chongyi},
  booktitle={European Conference on Computer Vision},
  pages={364--380},
  year={2024},
  organization={Springer}
}

@misc{zhang2025ramrobustrepresentationlearning,
  title={RAM++: Robust Representation Learning via Adaptive Mask for All-in-One Image Restoration},
  author={Zilong Zhang and Chujie Qin and Chunle Guo and Yong Zhang and Chao Xue and Ming-Ming Cheng and Chongyi Li},
  year={2025},
  eprint={2509.12039},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.12039},
}
```