This model was built with Meta Llama by applying AMD-Quark for MXFP4 quantization.
It was quantized from meta-llama/Llama-3.1-405B-Instruct using AMD-Quark: weights and activations were quantized to MXFP4, and the KV cache was quantized to FP8. The AutoSmoothQuant algorithm was applied to preserve accuracy during quantization.
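For intuition, MXFP4 (OCP Microscaling FP4) stores each group of 32 values with a shared power-of-two (E8M0) scale, and each element as an FP4 (E2M1) value. The sketch below fake-quantizes a single block under those assumptions; it illustrates the numeric format only and is not Quark's implementation (the `quantize_mxfp4_block` helper is hypothetical):

```python
import numpy as np

# Representable FP4 (E2M1) magnitudes; each also exists with a negative sign.
FP4_MAGNITUDES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_mxfp4_block(block: np.ndarray) -> np.ndarray:
    """Fake-quantize one 32-element block: quantize, then return dequantized values."""
    max_abs = np.abs(block).max()
    if max_abs == 0.0:
        return np.zeros_like(block)
    # Shared E8M0 scale: a power of two aligning the block max with
    # FP4's largest magnitude, 6.0 = 1.5 * 2**2.
    scale = 2.0 ** (np.floor(np.log2(max_abs)) - 2)
    scaled = block / scale
    # Round each element to the nearest representable FP4 magnitude
    # (values beyond +/-6 saturate to the largest grid point).
    idx = np.abs(np.abs(scaled)[:, None] - FP4_MAGNITUDES[None, :]).argmin(axis=1)
    return np.sign(scaled) * FP4_MAGNITUDES[idx] * scale

x = np.random.randn(32).astype(np.float32)
print("max abs error:", np.abs(x - quantize_mxfp4_block(x)).max())
```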
Quantization script:

```bash
cd Quark/examples/torch/language_modeling/llm_ptq/
python3 quantize_quark.py --model_dir "meta-llama/Llama-3.1-405B-Instruct" \
                          --model_attn_implementation "sdpa" \
                          --quant_scheme w_mxfp4_a_mxfp4 \
                          --group_size 32 \
                          --kv_cache_dtype fp8 \
                          --quant_algo autosmoothquant \
                          --min_kv_scale 1.0 \
                          --model_export hf_format \
                          --output_dir amd/Llama-3.1-405B-Instruct-MXFP4 \
                          --multi_gpu
```
This model can be deployed efficiently using the vLLM backend.
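As an illustration, loading the model through vLLM's offline Python API might look like the following; the parallelism and memory settings are assumptions mirroring the evaluation commands below, not documented requirements:

```python
from vllm import LLM, SamplingParams

# Assumed settings, taken from the lm_eval invocations in this card:
# tensor parallelism across 8 GPUs and an FP8 KV cache.
llm = LLM(
    model="amd/Llama-3.1-405B-Instruct-MXFP4",
    tensor_parallel_size=8,
    kv_cache_dtype="fp8",
    gpu_memory_utilization=0.85,
)

outputs = llm.generate(["Briefly explain MXFP4 quantization."],
                       SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```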
The model was evaluated on MMLU, GSM8K_COT, ARC Challenge, and IFEval. Evaluation was conducted using the lm-evaluation-harness framework with the vLLM engine.
| Benchmark | Llama-3.1-405B-Instruct | Llama-3.1-405B-Instruct-MXFP4 (this model) | Recovery |
|---|---|---|---|
| MMLU (5-shot) | 87.63 | 86.68 | 98.92% |
| GSM8K_COT (8-shot, strict-match) | 96.51 | 96.13 | 99.61% |
| ARC Challenge (0-shot) | 96.65 | 96.39 | 99.73% |
| IFEval (0-shot, (inst_level_strict_acc + prompt_level_strict_acc) / 2) | 88.52 | 87.00 | 98.28% |
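Recovery is the quantized score expressed as a percentage of the baseline score; for MMLU, 86.68 / 87.63 ≈ 98.92%.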
The results were obtained using the following commands:
```bash
lm_eval \
    --model vllm \
    --model_args pretrained="amd/Llama-3.1-405B-Instruct-MXFP4-Preview",gpu_memory_utilization=0.85,tensor_parallel_size=8,kv_cache_dtype='fp8' \
    --tasks mmlu_llama \
    --fewshot_as_multiturn \
    --apply_chat_template \
    --num_fewshot 5 \
    --batch_size auto
```

```bash
lm_eval \
    --model vllm \
    --model_args pretrained="amd/Llama-3.1-405B-Instruct-MXFP4-Preview",gpu_memory_utilization=0.85,tensor_parallel_size=8,kv_cache_dtype='fp8' \
    --tasks gsm8k_llama \
    --fewshot_as_multiturn \
    --apply_chat_template \
    --num_fewshot 8 \
    --batch_size auto
```

```bash
lm_eval \
    --model vllm \
    --model_args pretrained="amd/Llama-3.1-405B-Instruct-MXFP4-Preview",gpu_memory_utilization=0.85,tensor_parallel_size=8,kv_cache_dtype='fp8' \
    --tasks arc_challenge_llama \
    --fewshot_as_multiturn \
    --apply_chat_template \
    --num_fewshot 0 \
    --batch_size auto
```

```bash
lm_eval \
    --model vllm \
    --model_args pretrained="amd/Llama-3.1-405B-Instruct-MXFP4-Preview",gpu_memory_utilization=0.85,tensor_parallel_size=8,kv_cache_dtype='fp8' \
    --tasks ifeval \
    --fewshot_as_multiturn \
    --apply_chat_template \
    --num_fewshot 0 \
    --batch_size auto
```
Modifications copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.