SDNQ collection: models quantized with SDNQ.
4-bit (UINT4 with SVD rank 32) quantization of Wan-AI/Wan2.2-T2V-A14B-Diffusers using SDNQ.
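The "SVD rank 32" in the name refers to pairing the 4-bit weights with a low-rank correction term. As a rough, hypothetical sketch of the idea (not SDNQ's actual implementation, which has its own quantization scheme and storage format): quantize a weight matrix to 16 levels, then approximate the quantization residual with its top-32 singular components, so the reconstructed weight `Q(W) + U S Vᵀ` is closer to the original than `Q(W)` alone.

```python
import numpy as np

def fake_uint4_dequant(w):
    # Toy affine quantization of a weight matrix to 16 levels (4 bits),
    # returning the dequantized values. Illustration only.
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 15.0
    q = np.clip(np.round((w - lo) / scale), 0, 15)
    return q * scale + lo

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

w_q = fake_uint4_dequant(w)
err = w - w_q  # quantization residual

# Keep only the top-32 singular components of the residual
# (the "SVD rank 32" part of the scheme).
u, s, vt = np.linalg.svd(err, full_matrices=False)
rank = 32
correction = (u[:, :rank] * s[:rank]) @ vt[:rank]

plain_mse = np.mean((w - w_q) ** 2)
svd_mse = np.mean((w - (w_q + correction)) ** 2)
print(plain_mse, svd_mse)  # the low-rank correction reduces reconstruction error
```

At inference time the correction can be kept as two small rank-32 factors, so it adds little memory while recovering part of the precision lost to 4-bit storage.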
Usage:

```shell
pip install git+https://github.com/Disty0/sdnq
```
```python
import torch
import diffusers
from diffusers.utils import export_to_video
from sdnq import SDNQConfig  # importing sdnq registers it with diffusers and transformers

# Keep the VAE in float32 for numerical stability; the quantized transformer runs in bfloat16.
vae = diffusers.AutoencoderKLWan.from_pretrained(
    "Disty0/Wan2.2-T2V-A14B-SDNQ-uint4-svd-r32", subfolder="vae", torch_dtype=torch.float32
)
pipe = diffusers.WanPipeline.from_pretrained(
    "Disty0/Wan2.2-T2V-A14B-SDNQ-uint4-svd-r32", vae=vae, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload idle submodules to CPU to reduce VRAM usage

height = 720
width = 1280
prompt = "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
# Recommended negative prompt from the base Wan2.2 model card, kept in Chinese as the model expects.
# It lists unwanted traits: oversaturated colors, overexposure, static frames, blurry details,
# subtitles, JPEG artifacts, deformed or fused limbs, extra fingers, cluttered backgrounds,
# walking backwards, and similar defects.
negative_prompt = "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"

output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=height,
    width=width,
    num_frames=81,
    guidance_scale=4.0,    # guidance for the first (high-noise) stage
    guidance_scale_2=3.0,  # guidance for the second (low-noise) stage
    num_inference_steps=40,
).frames[0]
export_to_video(output, "t2v_out.mp4", fps=16)
```
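As a back-of-envelope estimate of why 4-bit quantization matters for this model (assuming roughly 14 billion parameters, as the "A14B" name suggests; actual checkpoint sizes will differ because of quantization scales, the rank-32 SVD factors, and any layers kept in higher precision):

```python
params = 14e9                  # ~14 billion parameters (approximate)
bf16_gb = params * 2 / 1e9     # bfloat16: 2 bytes per parameter
uint4_gb = params * 0.5 / 1e9  # uint4: 4 bits = 0.5 bytes per parameter
print(f"bf16: ~{bf16_gb:.0f} GB, uint4: ~{uint4_gb:.0f} GB")
```

That is roughly a 4x reduction in weight memory, which, combined with `enable_model_cpu_offload()`, is what makes running the pipeline feasible on consumer GPUs.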
Base model: Wan-AI/Wan2.2-T2V-A14B-Diffusers