This is an MXFP4_MOE quantization of DeepSeek-V3.1-Terminus.

Quantized from the BF16 GGUFs at: https://huggingface.co/unsloth/DeepSeek-V3.1-Terminus-GGUF

Original model: https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Terminus
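
As a minimal sketch, the quant can be loaded locally with llama-cpp-python. The GGUF filename below is an assumption (a model this size ships as split shards); substitute the actual first shard name from this repository.

```python
# Minimal sketch: load this MXFP4_MOE GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    # Hypothetical shard name; replace with the real file from this repo.
    model_path="DeepSeek-V3.1-Terminus-MXFP4_MOE-00001-of-000XX.gguf",
    n_ctx=4096,       # context window; raise it if you have the memory
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```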

Repository: noctrex/DeepSeek-V3.1-Terminus-MXFP4_MOE-GGUF
Format: GGUF
Model size: 671B params
Architecture: deepseek2
Quantization: 4-bit (MXFP4_MOE)