Update README.md
README.md CHANGED
@@ -15,6 +15,14 @@ Please follow the license of the original model.
## How To Use

+
+**vLLM usage**
+
+~~~bash
+vllm serve Intel/Qwen3-235B-A22B-Thinking-2507-int4-AutoRound --tensor-parallel-size 4 --max-model-len 32768
+~~~
+
+
**INT4 Inference on CPU/Intel GPU/CUDA**

~~~python
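The added `vllm serve` line starts vLLM's OpenAI-compatible server for the INT4 model, with `--tensor-parallel-size 4` sharding it across four GPUs. As a minimal sketch of how such a server could be queried, assuming it is listening on vLLM's default `http://localhost:8000` and that the `requests` package is available (neither is stated in the diff itself):

~~~python
# Minimal sketch: send a chat request to the OpenAI-compatible endpoint
# exposed by the `vllm serve` command above. Assumes the default port 8000.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "Intel/Qwen3-235B-A22B-Thinking-2507-int4-AutoRound",
        "messages": [
            {"role": "user", "content": "Give a brief introduction to large language models."}
        ],
        "max_tokens": 512,
    },
)
# Print the assistant's reply from the first returned choice.
print(response.json()["choices"][0]["message"]["content"])
~~~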