---
base_model: Qwen/Qwen-Image-Edit
base_model_relation: quantized
datasets:
- mit-han-lab/svdquant-datasets
language:
- en
library_name: diffusers
license: apache-2.0
pipeline_tag: image-to-image
tags:
- image-editing
- SVDQuant
- Qwen-Image-Edit
- Diffusion
- Quantization
- ICLR2025
---

<p align="center" style="border-radius: 10px">
  <img src="https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/nunchaku_v2.png" width="30%" alt="Nunchaku Logo"/>
</p>

<div align="center">
  <a href="https://discord.gg/Wk6PnwX9Sm" target="_blank"><img src="https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fdiscord.com%2Fapi%2Finvites%2FWk6PnwX9Sm%3Fwith_counts%3Dtrue&query=%24.approximate_member_count&logo=discord&logoColor=white&label=Discord&color=green&suffix=%20total" height="22px" alt="Discord"></a>
  <a href="https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/wechat.jpg" target="_blank"><img src="https://img.shields.io/badge/WeChat-07C160?logo=wechat&logoColor=white" height="22px" alt="WeChat"></a>
</div>

# Model Card for nunchaku-qwen-image-edit

![comfyui](https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/ComfyUI-nunchaku/workflows/nunchaku-qwen-image-edit.png)

This repository contains Nunchaku-quantized versions of [Qwen-Image-Edit](https://huggingface.co/Qwen/Qwen-Image-Edit), an image-editing model built on [Qwen-Image](https://huggingface.co/Qwen/Qwen-Image) that carries over its advances in complex text rendering. The quantized models are optimized for efficient inference with minimal loss in quality.

## News

No recent news. Stay tuned for updates!

## Model Details

### Model Description

- **Developed by:** Nunchaku Team
- **Model type:** image-to-image
- **License:** apache-2.0
- **Quantized from model:** [Qwen-Image-Edit](https://huggingface.co/Qwen/Qwen-Image-Edit)

### Model Files

Choose a file by GPU generation (INT4 for pre-Blackwell GPUs, NVFP4 for Blackwell 50-series) and by rank, which trades quality for speed (a quick selection sketch follows this list):

- [`svdq-int4_r32-qwen-image-edit.safetensors`](./svdq-int4_r32-qwen-image-edit.safetensors): SVDQuant INT4 (rank 32) Qwen-Image-Edit model, for non-Blackwell GPUs (pre-50-series).
- [`svdq-int4_r128-qwen-image-edit.safetensors`](./svdq-int4_r128-qwen-image-edit.safetensors): SVDQuant INT4 (rank 128) Qwen-Image-Edit model, for non-Blackwell GPUs (pre-50-series). Better quality than the rank-32 model, but slower.
- [`svdq-int4_r32-qwen-image-edit-lightningv1.0-4steps.safetensors`](./svdq-int4_r32-qwen-image-edit-lightningv1.0-4steps.safetensors): SVDQuant INT4 (rank 32) 4-step Qwen-Image-Edit model, with [Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors) fused at LoRA strength 1.0. For non-Blackwell GPUs (pre-50-series).
- [`svdq-int4_r128-qwen-image-edit-lightningv1.0-4steps.safetensors`](./svdq-int4_r128-qwen-image-edit-lightningv1.0-4steps.safetensors): SVDQuant INT4 (rank 128) 4-step Qwen-Image-Edit model, with [Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors) fused at LoRA strength 1.0. For non-Blackwell GPUs (pre-50-series).
- [`svdq-int4_r32-qwen-image-edit-lightningv1.0-8steps.safetensors`](./svdq-int4_r32-qwen-image-edit-lightningv1.0-8steps.safetensors): SVDQuant INT4 (rank 32) 8-step Qwen-Image-Edit model, with [Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors) fused at LoRA strength 1.0. For non-Blackwell GPUs (pre-50-series).
- [`svdq-int4_r128-qwen-image-edit-lightningv1.0-8steps.safetensors`](./svdq-int4_r128-qwen-image-edit-lightningv1.0-8steps.safetensors): SVDQuant INT4 (rank 128) 8-step Qwen-Image-Edit model, with [Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors) fused at LoRA strength 1.0. For non-Blackwell GPUs (pre-50-series).
- [`svdq-fp4_r32-qwen-image-edit.safetensors`](./svdq-fp4_r32-qwen-image-edit.safetensors): SVDQuant NVFP4 (rank 32) Qwen-Image-Edit model, for Blackwell GPUs (50-series).
- [`svdq-fp4_r128-qwen-image-edit.safetensors`](./svdq-fp4_r128-qwen-image-edit.safetensors): SVDQuant NVFP4 (rank 128) Qwen-Image-Edit model, for Blackwell GPUs (50-series). Better quality than the rank-32 model, but slower.
- [`svdq-fp4_r32-qwen-image-edit-lightningv1.0-4steps.safetensors`](./svdq-fp4_r32-qwen-image-edit-lightningv1.0-4steps.safetensors): SVDQuant NVFP4 (rank 32) 4-step Qwen-Image-Edit model, with [Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors) fused at LoRA strength 1.0. For Blackwell GPUs (50-series).
- [`svdq-fp4_r128-qwen-image-edit-lightningv1.0-4steps.safetensors`](./svdq-fp4_r128-qwen-image-edit-lightningv1.0-4steps.safetensors): SVDQuant NVFP4 (rank 128) 4-step Qwen-Image-Edit model, with [Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors) fused at LoRA strength 1.0. For Blackwell GPUs (50-series).
- [`svdq-fp4_r32-qwen-image-edit-lightningv1.0-8steps.safetensors`](./svdq-fp4_r32-qwen-image-edit-lightningv1.0-8steps.safetensors): SVDQuant NVFP4 (rank 32) 8-step Qwen-Image-Edit model, with [Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors) fused at LoRA strength 1.0. For Blackwell GPUs (50-series).
- [`svdq-fp4_r128-qwen-image-edit-lightningv1.0-8steps.safetensors`](./svdq-fp4_r128-qwen-image-edit-lightningv1.0-8steps.safetensors): SVDQuant NVFP4 (rank 128) 8-step Qwen-Image-Edit model, with [Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Edit-Lightning-8steps-V1.0-bf16.safetensors) fused at LoRA strength 1.0. For Blackwell GPUs (50-series).
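
The sketch below is illustrative rather than part of any official API. It assumes Blackwell-class GPUs (which NVFP4 requires) report a CUDA compute capability with major version 10 or higher; the `get_precision()` helper in `nunchaku.utils`, used by the example scripts linked under Usage, performs a similar check.

```python
# Illustrative sketch: pick a model file by GPU generation and desired rank.
# Assumption: Blackwell-class GPUs (RTX 50-series, B100/B200) report a CUDA
# compute capability with major version >= 10; older GPUs should use INT4.
import torch

major, _ = torch.cuda.get_device_capability()
precision = "fp4" if major >= 10 else "int4"
rank = 128  # rank 128: better quality but slower; rank 32: faster
print(f"svdq-{precision}_r{rank}-qwen-image-edit.safetensors")
```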

### Model Sources

- **Inference Engine:** [nunchaku](https://github.com/nunchaku-tech/nunchaku)
- **Quantization Library:** [deepcompressor](https://github.com/nunchaku-tech/deepcompressor)
- **Paper:** [SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models](http://arxiv.org/abs/2411.05007)
- **Demo:** [svdquant.mit.edu](https://svdquant.mit.edu)

## Usage

- Diffusers Usage: See [qwen-image-edit.py](https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit.py) and [qwen-image-edit-lightning.py](https://github.com/nunchaku-tech/nunchaku/blob/main/examples/v1/qwen-image-edit-lightning.py), and check this [tutorial](https://nunchaku.tech/docs/nunchaku/usage/qwen-image-edit.html) for more advanced usage. A minimal sketch is shown after this list.
- ComfyUI Usage: See the [nunchaku-qwen-image-edit.json](https://nunchaku.tech/docs/ComfyUI-nunchaku/workflows/qwenimage.html#nunchaku-qwen-image-edit-json) workflow.
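
For orientation, here is a minimal Diffusers sketch distilled from the linked example scripts. The `NunchakuQwenImageTransformer2DModel` class and the `nunchaku.utils.get_precision()` helper are assumptions based on recent nunchaku releases, and the prompt and file paths are placeholders; consult the linked examples for the exact API of your installed version.

```python
# Minimal sketch (see the linked qwen-image-edit.py for the exact API):
# load the SVDQuant transformer, then plug it into the stock Diffusers pipeline.
import torch
from diffusers import QwenImageEditPipeline
from diffusers.utils import load_image

from nunchaku import NunchakuQwenImageTransformer2DModel
from nunchaku.utils import get_precision

precision = get_precision()  # "int4" on pre-Blackwell GPUs, "fp4" on Blackwell
transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
    f"nunchaku-tech/nunchaku-qwen-image-edit/svdq-{precision}_r32-qwen-image-edit.safetensors"
)
pipeline = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("input.png")  # placeholder input image
result = pipeline(
    image=image,
    prompt="Change the sign text to 'NUNCHAKU'",  # placeholder edit instruction
    num_inference_steps=50,
).images[0]
result.save("output.png")
```

For the Lightning variants, drop `num_inference_steps` to the fused step count (4 or 8); step distillation removes the need for classifier-free guidance, so the lightning example also disables true CFG (e.g., `true_cfg_scale=1.0`). Again, verify these settings against the linked script.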

## Performance

![performance](https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/efficiency.jpg)

## Citation

```bibtex
@inproceedings{
  li2024svdquant,
  title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
  author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
```