Update evaluation results
README.md CHANGED

@@ -19,7 +19,7 @@ Please follow the license of the original model.
 **vLLM usage**
 
 ~~~bash
-vllm serve Intel/Qwen3-30B-A3B-Thinking-2507-int4-AutoRound --tensor-parallel-size 4 --max-model-len 32768
+vllm serve Intel/Qwen3-30B-A3B-Thinking-2507-int4-AutoRound --tensor-parallel-size 4 --max-model-len 32768 --enable-expert-parallel
 ~~~
 
 **INT4 Inference on CPU/Intel GPU/CUDA**
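Once the server is up, it exposes vLLM's OpenAI-compatible API; a minimal smoke test might look like the following (a sketch assuming the default port 8000; the prompt and token limit are placeholders):

~~~bash
# Query the OpenAI-compatible chat endpoint started by `vllm serve`.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Intel/Qwen3-30B-A3B-Thinking-2507-int4-AutoRound",
        "messages": [{"role": "user", "content": "Give me a short introduction to large language models."}],
        "max_tokens": 256
      }'
~~~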
@@ -80,6 +80,21 @@ Here is the sample command to reproduce the model
 auto-round --model Qwen/Qwen3-30B-A3B-Thinking-2507 --output_dir "./tmp_autoround" --enable_torch_compile --nsamples 512 --fp_layers mlp.gate
 ~~~
 
+## Evaluation Results
+
+| benchmark | backend | Intel/Qwen3-30B-A3B-Thinking-2507-int4-AutoRound | Qwen/Qwen3-30B-A3B-Thinking-2507 |
+| :-------: | :-----: | :----------------------------------------------: | :------------------------------: |
+| mmlu_pro  | vllm    | 0.6956                                           | 0.7144                           |
+
+```
+# key dependency versions
+torch 2.8.0
+transformers 4.56.1
+lm_eval 0.4.9.1
+vllm 0.10.2rc3.dev106+g31bb760eb.precompiled
+# vLLM needs https://github.com/vllm-project/vllm/pull/24818 applied
+```
+
 ## Ethical Considerations and Limitations
 
 The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
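The dependency list above pins a vLLM dev build with PR #24818 applied; one way to build such a tree from source is sketched below (assumed steps, not necessarily the exact ones used for the table):

~~~bash
# Fetch vLLM, merge the referenced PR branch from GitHub, then install from source.
git clone https://github.com/vllm-project/vllm.git
cd vllm
git fetch origin pull/24818/head:pr-24818
git merge pr-24818
pip install -e .
~~~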
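Similarly, a plausible lm-eval invocation for the mmlu_pro row (the task name and vLLM backend come from the table; tensor_parallel_size, max_model_len, and batch_size are assumptions carried over from the serving command):

~~~bash
# Run MMLU-Pro through lm-eval's vLLM backend against the INT4 model.
lm_eval --model vllm \
  --model_args pretrained=Intel/Qwen3-30B-A3B-Thinking-2507-int4-AutoRound,tensor_parallel_size=4,max_model_len=32768 \
  --tasks mmlu_pro \
  --batch_size auto
~~~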