---
license: apache-2.0
base_model:
- Qwen/Qwen-Image-Edit
language:
- en
- zh
library_name: diffusers
pipeline_tag: image-to-image
datasets:
- OPPOer/X2Edit-Dataset
---
<div align="center">
<h1>Qwen-Image-Edit-Pruning</h1>
<a href='https://github.com/OPPO-Mente-Lab/Qwen-Image-Pruning'><img src="https://img.shields.io/badge/GitHub-OPPOer-blue.svg?logo=github" alt="GitHub"></a>
</div>
## Update
- 2025/10/09: We release **[Qwen-Image-Edit-2509-Pruning-13B-4steps](https://huggingface.co/OPPOer/Qwen-Image-Edit-2509-Pruning)**
- 2025/09/29: We release **[Qwen-Image-Edit-2509-Pruning-14B](https://huggingface.co/OPPOer/Qwen-Image-Edit-2509-Pruning)**
- 2025/09/28: We release **[Qwen-Image-Edit-Pruning-13B-4steps](https://huggingface.co/OPPOer/Qwen-Image-Edit-Pruning)**
## Introduction
This open-source project is based on Qwen-Image-Edit. We prune the model by removing 20 of its 60 transformer layers and retaining the weights of the remaining 40, which reduces the model to 13.6B parameters. The pruned version will continue to be iterated on, so stay tuned.
<div align="center">
<img src="bench.png">
</div>
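For readers curious what this kind of depth pruning looks like in practice, the sketch below drops transformer blocks from the base model using diffusers. It assumes the transformer exposes its blocks as `transformer_blocks` (as diffusers' `QwenImageTransform­er2DModel` does); the keep-every-two-of-three schedule is purely illustrative and is not the layer selection used for the released checkpoint.
```python
import torch
from diffusers import QwenImageEditPipeline

pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
)
blocks = pipe.transformer.transformer_blocks
print(f"original depth: {len(blocks)}")  # 60 transformer layers in the base model

# Hypothetical schedule: drop every third block (20 of 60), keep the other 40.
keep = [i for i in range(len(blocks)) if i % 3 != 2]
pipe.transformer.transformer_blocks = torch.nn.ModuleList([blocks[i] for i in keep])
# Keep the config in sync so the pruned transformer reloads correctly.
pipe.transformer.register_to_config(num_layers=len(keep))
print(f"pruned depth: {len(pipe.transformer.transformer_blocks)}")  # 40
```
Which layers are dropped matters a great deal for output quality, so the indices above should be treated as a placeholder rather than a recipe.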
## Quick Start
Install the latest versions of PyTorch and diffusers:
```bash
pip install torch
pip install git+https://github.com/huggingface/diffusers
```
### Qwen-Image-Edit-13B Inference
```python
import time

import torch
from diffusers import QwenImageEditPipeline
from PIL import Image

# Load the pruned editing pipeline in bfloat16 and move it to the GPU.
model_name = "OPPOer/Qwen-Image-Edit-Pruning"
pipe = QwenImageEditPipeline.from_pretrained(model_name, torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")

subject_img = Image.open("input.jpg").convert("RGB")
prompt = "改为数字插画风格"  # "Change to a digital illustration style"

t1 = time.time()
inputs = {
    "image": subject_img,
    "prompt": prompt,
    "generator": torch.manual_seed(42),
    "true_cfg_scale": 1,       # disable true CFG for the distilled 4-step model
    "num_inference_steps": 4,  # the 4-step checkpoint needs only 4 denoising steps
}
with torch.inference_mode():
    output = pipe(**inputs)
output_image = output.images[0]
output_image.save("output.jpg")
print(f"Edit completed in {time.time() - t1:.2f}s")
```
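On GPUs with limited VRAM, diffusers' standard CPU-offload hook can be used in place of the `pipe.to("cuda")` call above. This is a generic diffusers feature, not specific to this checkpoint:
```python
# Instead of pipe.to("cuda"): keep submodules on the CPU and move each to the
# GPU only while it runs, trading some speed for a much smaller memory footprint.
pipe.enable_model_cpu_offload()
```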