# DeepSeek-V3.1-Terminus-W4AFP8

This model is a mixed-precision quantized version of DeepSeek-V3.1-Terminus: the dense layers keep the FP8 quantization of the original model, while the MoE layers use INT4 weights with FP8 activations, a scheme also known as W4AFP8.
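To make the scheme concrete, here is a minimal, hypothetical PyTorch sketch of W4AFP8-style quantization: symmetric per-group INT4 weight quantization combined with per-tensor FP8 (E4M3) activation quantization. The group size, scaling strategy, and function names are illustrative assumptions, not the exact recipe used to produce this checkpoint.

```python
import torch

def quantize_w4afp8(weight: torch.Tensor, act: torch.Tensor, group_size: int = 128):
    """Illustrative W4AFP8 quantization (assumed recipe, for explanation only):
    INT4 weights with per-group scales + FP8 E4M3 activations with a per-tensor scale."""
    # INT4 weight quantization: symmetric, one scale per group of `group_size` values.
    w = weight.reshape(-1, group_size)
    w_scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 7.0
    w_q = torch.clamp(torch.round(w / w_scale), -8, 7).to(torch.int8)  # int4 stored in int8

    # FP8 E4M3 activation quantization: one scale for the whole tensor.
    a_scale = act.abs().amax().clamp(min=1e-8) / 448.0  # 448 = max finite E4M3 value
    a_q = (act / a_scale).to(torch.float8_e4m3fn)

    return w_q, w_scale, a_q, a_scale

def dequant_matmul(w_q, w_scale, a_q, a_scale, out_features, in_features):
    """Reference dequantize-then-matmul; real W4AFP8 kernels fuse this on the fly."""
    w = (w_q.float() * w_scale).reshape(out_features, in_features)
    a = a_q.float() * a_scale
    return a @ w.t()
```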

## Benchmark

The accuracy results below were obtained with SGLang v0.5.3 in non-thinking mode.

| Model | MATH-500 | GPQA | AIME 2024 | MMLU-Pro |
|---|---|---|---|---|
| DeepSeek-V3.1-Terminus-W4AFP8 | 89.83 | 78.28 | 80.0 | 83.66 |

## Inference with SGLang

SGLang supports deploying this model with tensor parallelism for better performance. The related PR https://github.com/sgl-project/sglang/pull/8118 was merged in SGLang v0.5.2, so you can deploy this model with tensor parallelism using SGLang v0.5.2 or later.

```bash
python3 -m sglang.launch_server \
  --model-path /path/to/DeepSeek-V3.1-Terminus-W4AFP8 \
  --tp 8 \
  --trust-remote-code \
  --host 0.0.0.0 \
  --port 8000
```
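Once the server is up, it exposes an OpenAI-compatible API. Below is a minimal sketch of querying it with the OpenAI Python client, assuming the server is listening on localhost:8000 as launched above; the model name passed to the client is an assumption, so check the server's `/v1/models` endpoint for the exact value.

```python
from openai import OpenAI

# Assumes the SGLang server from the command above is listening on localhost:8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="DeepSeek-V3.1-Terminus-W4AFP8",  # assumed name; verify via GET /v1/models
    messages=[{"role": "user", "content": "Briefly explain W4AFP8 quantization."}],
    temperature=0.6,
)
print(response.choices[0].message.content)
```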