Some pointlessly bigger exllamav3 quants of Qwen3-Next-80B-A3B-Instruct to complement Turboderp's optimized quants.

- 6.00bpw_H6 (56.561 GiB)
- 8.00bpw_H8 (75.026 GiB)
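
A minimal download sketch using `huggingface_hub`. It assumes each quant sits in its own branch named after its bitrate (common for exl3 repos); check the repository's branch list for the actual revision names before running.

```python
# Sketch: fetch one quant of this repo with huggingface_hub.
# The revision "6.00bpw_H6" is an assumed branch name -- verify it
# against the branches listed on the Hugging Face repo page.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="MikeRoz/Qwen3-Next-80B-A3B-Instruct-exl3",
    revision="6.00bpw_H6",  # hypothetical branch name
    local_dir="Qwen3-Next-80B-A3B-Instruct-exl3-6.00bpw_H6",
)
print(f"Quant downloaded to {local_path}")
```

Once the weights are local, the directory can be loaded with exllamav3 or any frontend that supports exl3 quants.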
