This is the official QAT FP-Quant checkpoint of `meta-llama/Llama-3.1-8B-Instruct`, produced as described in the paper [**"Bridging the Gap Between Promise and Performance for Microscaling FP4 Quantization"**](https://arxiv.org/abs/2509.23202). The model runs on Blackwell-generation NVIDIA GPUs via [QuTLASS](https://github.com/IST-DASLab/qutlass) and [FP-Quant](https://github.com/IST-DASLab/FP-Quant), in either [transformers](https://huggingface.co/docs/transformers/main/en/quantization/fp_quant) or [vLLM](https://github.com/vllm-project/vllm/pull/24440); a minimal loading sketch is given after the table below. The approximate training recipe (up to local batch size and learning rate) is available [here](https://github.com/IST-DASLab/nanochat-qat/blob/qat/transformers_distill.py).

The table below compares this checkpoint against the original model and round-to-nearest (RTN) quantization:

| Model | MMLU | GSM8k | Hellaswag | Winogrande | Avg |
|-------|------|-------|-----------|------------|-----|
| `meta-llama/Llama-3.1-8B-Instruct` | 72.8 | 85.1 | 80.0 | 77.9 | 78.9 |
| RTN | 67.0 | 77.4 | 77.3 | 74.4 | 74.0 |
| QAT (this checkpoint) | 68.9 | 81.6 | 79.0 | 75.1 | 76.1 |
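
As a minimal sketch, the checkpoint can be loaded like any other pre-quantized Hugging Face model, assuming a Blackwell GPU, a recent `transformers` build with FP-Quant support, and the QuTLASS / FP-Quant kernels installed; the repository id below is a placeholder and should be replaced with this model card's repo id:

```python
# Minimal usage sketch (assumptions: Blackwell GPU, transformers with FP-Quant support,
# QuTLASS and FP-Quant installed). "<this-checkpoint-repo-id>" is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-checkpoint-repo-id>"  # placeholder for this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # non-quantized parts stay in bf16
    device_map="cuda",
)

messages = [{"role": "user", "content": "Explain microscaling FP4 quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the quantization configuration is stored in the checkpoint itself, no explicit `quantization_config` should be needed at load time; see the linked transformers FP-Quant documentation for quantizing other models from scratch.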