Model Overview

  • Model Architecture: Kimi-K2-Instruct
    • Input: Text
    • Output: Text
  • Supported Hardware Microarchitecture: AMD MI350/MI355
  • ROCm: 7.0
  • Operating System(s): Linux
  • Inference Engine: vLLM
  • Model Optimizer: AMD-Quark
    • Weight quantization: MoE layers only, OCP MXFP4, Static
    • Activation quantization: MoE layers only, OCP MXFP4, Dynamic
  • Calibration Dataset: Pile

This model was built from the Kimi-K2-Instruct model by applying AMD-Quark for MXFP4 quantization.

Model Quantization

The model was quantized from unsloth/Kimi-K2-Instruct-0905-BF16 using AMD-Quark. The weights and activations of the MoE layers are quantized to OCP MXFP4.
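
For intuition, OCP MXFP4 stores each 32-element block as 4-bit E2M1 values (representable magnitudes 0, 0.5, 1, 1.5, 2, 3, 4, 6) sharing one power-of-two E8M0 scale. The NumPy sketch below illustrates that scheme; it is an illustration of the format only, not the AMD-Quark code path, and details such as the rounding mode may differ in the production quantizer.

import numpy as np

# Representable magnitudes of an FP4 E2M1 element (sign handled separately).
E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_quantize_block(x: np.ndarray) -> np.ndarray:
    """Quantize one 32-element block to OCP MXFP4 and return the
    dequantized values, i.e. what a kernel would see at compute time."""
    assert x.size == 32, "MXFP4 uses 32-element blocks"
    max_abs = np.abs(x).max()
    if max_abs == 0.0:
        return np.zeros_like(x)
    # Shared E8M0 scale: floor(log2(max_abs)) - emax_elem, where
    # emax_elem = 2 for E2M1 (its largest normal value is 6 = 1.5 * 2^2).
    scale = 2.0 ** (np.floor(np.log2(max_abs)) - 2)
    scaled = x / scale
    # Round each scaled value to the nearest representable E2M1 magnitude.
    idx = np.abs(np.abs(scaled)[:, None] - E2M1_GRID[None, :]).argmin(axis=1)
    return np.sign(scaled) * E2M1_GRID[idx] * scale

# Example: quantization error on one random block of weights.
rng = np.random.default_rng(0)
w = rng.standard_normal(32).astype(np.float32)
print("max abs error:", np.abs(w - mxfp4_quantize_block(w)).max())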

Deployment

Use with vLLM

This model can be deployed efficiently using the vLLM backend.
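
Once a server is up (see the launch command under Reproduction below), vLLM exposes an OpenAI-compatible API. A minimal client sketch, assuming the port and served model name used in that command (8000, kimi-k2-mxfp4):

from openai import OpenAI

# vLLM serves an OpenAI-compatible endpoint; the api_key is unused but required.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="kimi-k2-mxfp4",  # must match --served-model-name
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)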

Evaluation

The model was evaluated on the GSM8K benchmark.

Accuracy

Benchmark              Kimi-K2-Instruct-0905   Kimi-K2-Instruct-0905-MXFP4 (this model)   Recovery
GSM8K (strict-match)   95.53                   94.47                                      98.89%
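
Recovery is the quantized score expressed as a fraction of the BF16 baseline: 94.47 / 95.53 ≈ 0.9889, i.e. 98.89%.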

Reproduction

The GSM8K results were obtained using the lm-evaluation-harness framework, based on the Docker image rocm/vllm-private:vllm_dev_base_mxfp4_20260122, with vLLM and lm-eval compiled and installed from source inside the image.
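
As a starting point, the container can be launched with the standard ROCm device mappings; the sketch below assumes access to the private image and uses host networking so the vLLM port is reachable from the evaluation terminal:

docker run -it --rm \
  --network host \
  --ipc host \
  --device /dev/kfd \
  --device /dev/dri \
  --group-add video \
  rocm/vllm-private:vllm_dev_base_mxfp4_20260122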

Launching the server

# Use the Triton MLA attention backend (Kimi K2 uses multi-head latent attention)
export VLLM_ATTENTION_BACKEND="TRITON_MLA"
# Enable AMD AITER kernels on ROCm
export VLLM_ROCM_USE_AITER=1
# Keep the AITER fused shared-experts path disabled
export VLLM_ROCM_USE_AITER_FUSION_SHARED_EXPERTS=0

vllm serve amd/Kimi-K2-Instruct-0905-MXFP4 \
  --port 8000 \
  --served-model-name kimi-k2-mxfp4 \
  --trust-remote-code \
  --tensor-parallel-size 8 \
  --enable-auto-tool-choice \
  --tool-call-parser kimi_k2
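
Once the server finishes loading, a quick smoke test from another terminal confirms the model is registered (assuming the default localhost binding); the response should list kimi-k2-mxfp4 among the available models:

curl http://localhost:8000/v1/models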

Evaluating the model in a new terminal

lm_eval \
  --model local-completions \
  --model_args "model=kimi-k2-mxfp4,base_url=http://0.0.0.0:8000/v1/completions,tokenized_requests=False,tokenizer_backend=None,num_concurrent=32" \
  --tasks gsm8k \
  --num_fewshot 5 \
  --batch_size 1

License

Modifications Copyright (c) 2025 Advanced Micro Devices, Inc. All rights reserved.
