GLM-4.6V-Flash-nvfp4

Format: NVFP4 — weights & activations quantized to FP4 with dual scaling.
Base model: zai-org/GLM-4.6V-Flash
How it was made: One-shot calibration with LLM Compressor using the NVFP4 recipe, with long-sequence calibration (256 samples at a maximum sequence length of 4096) drawn from Rombo-Org/Optimized_Reasoning.

Notes: Keep lm_head in high precision; calibrate on long, domain-relevant sequences.
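For reference, here is a minimal sketch of that quantization flow with LLM Compressor. It is not the exact script used to produce this checkpoint: the model-loading class, the dataset column name ("text"), and the save directory are assumptions and may need adjusting for this architecture and dataset.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "zai-org/GLM-4.6V-Flash"
NUM_CALIBRATION_SAMPLES = 256
MAX_SEQUENCE_LENGTH = 4096

# Load the base model and tokenizer. trust_remote_code is set in case the
# architecture is not yet in the installed transformers release; the loading
# class is an assumption for this (vision-language) model.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

# Calibration data: long, domain-relevant sequences. The "text" column name is
# an assumption about the dataset schema.
ds = load_dataset("Rombo-Org/Optimized_Reasoning", split="train")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
ds = ds.map(
    lambda sample: tokenizer(
        sample["text"],
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    ),
    remove_columns=ds.column_names,
)

# NVFP4 recipe: quantize Linear weights and activations to FP4 while keeping
# lm_head in high precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

# One-shot calibration pass.
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

model.save_pretrained("GLM-4.6V-Flash-nvfp4", save_compressed=True)
tokenizer.save_pretrained("GLM-4.6V-Flash-nvfp4")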

Check the original model card for information about this model.

Running the model with vLLM in Docker

The current latest and nightly builds of the vLLM Docker image ship a version of transformers that is too old to run GLM-4.6V-Flash. To remedy this, a lightweight Docker image can be built locally that installs a newer transformers release:

Create a file named Dockerfile containing:

FROM vllm/vllm-openai:nightly
RUN pip install -U --pre "transformers>=5.0.0rc0"

Build the new container locally:

sudo docker build -t vllm-glm46v-t5 .

Once the container is built locally, the model can be run as follows:

sudo docker run --runtime nvidia --gpus all --ipc=host -p 8000:8000 --rm vllm-glm46v-t5 Firworks/GLM-4.6V-Flash-nvfp4 --dtype auto --max-model-len 32768

Incidentally, these instructions should also work for running the official unquantized version of GLM-4.6V-Flash. Just swap out the model name in the final docker run command.

This was tested on an RTX Pro 6000 Blackwell cloud instance.
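Once the server is up, it exposes an OpenAI-compatible API on port 8000. Below is a minimal sketch of a request using the openai Python client; the image URL is a placeholder, and it assumes the served model name defaults to the model path passed on the command line.

from openai import OpenAI

# vLLM does not require a real API key by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Firworks/GLM-4.6V-Flash-nvfp4",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                # Placeholder image URL; replace with your own.
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)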

If there are other models you're interested in seeing quantized to NVFP4 for use on the DGX Spark or other modern Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.
