---
license: apache-2.0
---

This repository contains 3 versions of Qwen3-30B-A3B quantized with `IQ4_KS` (4.25 bpw quantization). The interesting part is that these models achieve a lower perplexity on `wiki.test.raw` than the original `bf16` model. This is surprising, considering that no QAT has been mentioned in the [Qwen3 announcement](https://qwenlm.github.io/blog/qwen3/). Hence I'm putting them out there for anyone interested in evaluating performance by means other than PPL, or just using them for local inference. For more details see [this discussion](https://github.com/ikawrakow/ik_llama.cpp/discussions/359).

**Note**: These models will only work with [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp), as the `IQ4_KS` quantization type is not available in mainline `llama.cpp`. A build-and-run sketch is given below the model list.

The only difference between the 3 models is the imatrix used:
* **Qwen3-30B-A3B-IQ4_KS-IK-Imatrix.gguf**: imatrix computed using 500,000 tokens from `wiki.train.raw`
* **Qwen3-30B-A3B-IQ4_KS-Bartowski-Imatrix.gguf**: Bartowski's [imatrix](https://huggingface.co/bartowski/Qwen_Qwen3-30B-A3B-GGUF/blob/main/Qwen_Qwen3-30B-A3B.imatrix)
* **Qwen3-30B-A3B-IQ4_KS-Unsloth-Imatrix.gguf**: Unsloth's [imatrix](https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF/blob/main/imatrix_unsloth.dat)
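
To try the models locally, ik_llama.cpp needs to be built from source. A minimal sketch, assuming a CUDA-capable machine; binary names and flags follow mainline `llama.cpp` conventions, and paths, context size, and GPU offload are placeholders to adjust for your setup:

```bash
# Build ik_llama.cpp (drop -DGGML_CUDA=ON for a CPU-only build)
git clone https://github.com/ikawrakow/ik_llama.cpp
cd ik_llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

# Run one of the quantized models interactively
./build/bin/llama-cli \
    -m /path/to/Qwen3-30B-A3B-IQ4_KS-IK-Imatrix.gguf \
    -ngl 99 -c 8192 \
    -p "Explain mixture-of-experts models in two sentences."
```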
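
The perplexity claim can be checked with the `llama-perplexity` tool from the same build. A minimal sketch, assuming a local copy of `wiki.test.raw` (the dataset is not bundled with this repository):

```bash
# PPL over wiki.test.raw; run the same command against the bf16 model to compare
./build/bin/llama-perplexity \
    -m /path/to/Qwen3-30B-A3B-IQ4_KS-IK-Imatrix.gguf \
    -f /path/to/wiki.test.raw \
    -ngl 99
```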
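
For anyone who wants to reproduce a quantization like the IK-Imatrix variant, the sketch below shows the general recipe, assuming a `bf16` GGUF conversion of Qwen3-30B-A3B and a copy of `wiki.train.raw`. File names are placeholders, and the exact imatrix settings used here (e.g. limiting the calibration input to 500,000 tokens) are not reproduced:

```bash
# 1) Compute an imatrix from calibration text
./build/bin/llama-imatrix \
    -m Qwen3-30B-A3B-bf16.gguf \
    -f wiki.train.raw \
    -o qwen3-30b-a3b.imatrix

# 2) Quantize to IQ4_KS using that imatrix
./build/bin/llama-quantize \
    --imatrix qwen3-30b-a3b.imatrix \
    Qwen3-30B-A3B-bf16.gguf \
    Qwen3-30B-A3B-IQ4_KS.gguf \
    IQ4_KS
```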