Error with quantized models using the "Load Diffusion Model" node in ComfyUI

#1
by xuanwoa - opened

Thank you for your excellent work on the wan2.2 model. The standard version of the model works perfectly in my workflow.

I am encountering an issue specifically when trying to load your quantized models (fp8_e4m3 and int8) in ComfyUI. My workflow uses the "Load Diffusion Model" node, but it fails when I select either of the quantized model files.

The console displays the following error message:

unet unexpected: ['blocks.0.self_attn.q.weight_scale', 'blocks.0.self_attn.k.weight_scale', 'blocks.0.self_attn.v.weight_scale', 'blocks.0.self_attn.o.weight_scale', 'blocks.0.cross_attn.q.weight_scale', 'blocks.0.cross_attn.k.weight_scale', ... and many other similar 'weight_scale' keys]

This error suggests that the loading mechanism behind the "Load Diffusion Model" node cannot handle the quantization parameters (the weight_scale tensors) present in the model file.
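For reference, here is a minimal sketch for confirming that diagnosis yourself, assuming the safetensors Python package is installed; the filename is a placeholder for whichever quantized checkpoint you are loading. It simply lists the per-weight scale tensors that a stock loader reports as unexpected:

```python
# Minimal sketch: list the extra weight_scale tensors in a quantized checkpoint.
# Assumes the `safetensors` package is installed; the path is a hypothetical
# local filename, not one of the official releases.
from safetensors import safe_open

ckpt_path = "wan2.2_quantized_checkpoint.safetensors"  # placeholder path

with safe_open(ckpt_path, framework="pt", device="cpu") as f:
    # Collect the companion scale tensors stored alongside the regular weights.
    scale_keys = [k for k in f.keys() if k.endswith("weight_scale")]

print(f"{len(scale_keys)} weight_scale tensors found, e.g.:")
for key in scale_keys[:5]:
    print(" ", key)
```

If that list is non-empty, the checkpoint carries quantization scales that the loader has to understand, which matches the "unet unexpected" warning above.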

Is there a specific custom node or a different loading method required to properly use these quantized models with a node like "Load Diffusion Model"?

Any guidance on how to correctly integrate these models would be very helpful.

xuanwoa changed discussion title from Question about using the new wan2.2 INT8 model to Error with quantized models using the "Load Diffusion Model" node in ComfyUI

Try Wan2.2-Distill-Models/wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors and Wan2.2-Distill-Models/wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors

I get the same issue as the OP with the _comfyui.safetensors files.
