I needed to run a 4-bit quantization on vLLM, but only GGUF versions of this model were available, so I created this AWQ quantization.
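
Below is a minimal sketch of how a 4-bit AWQ quantization like this one could be produced with AutoAWQ. The exact tool and settings used for this repo are not documented here, so the source checkpoint path and `quant_config` values are assumptions, not a record of what was actually run:

```python
# Hypothetical reproduction sketch with AutoAWQ; paths and settings are assumptions.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_model = "path/to/DeepSeek-R1-Distill-Llama-70B-abliterated"  # assumed FP16 source checkpoint
quant_path = "DeepSeek-R1-Distill-Llama-70B-abliterated-AWQ"

# Typical 4-bit AWQ settings: group size 128, zero-point quantization, GEMM kernels
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base_model, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(base_model)

model.quantize(tokenizer, quant_config=quant_config)  # runs calibration with AutoAWQ's default dataset
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```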

With the AWQ build, loading time went from ~9 minutes to ~2.5 minutes, and throughput went from 25 tokens/second to 45 tokens/second.
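
For reference, here is a minimal sketch of loading this AWQ checkpoint with vLLM's offline API. The sampling parameters, `tensor_parallel_size`, and `max_model_len` below are illustrative assumptions, not the configuration behind the numbers above:

```python
# Illustrative vLLM usage; parallelism and context settings are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Ewere/DeepSeek-R1-Distill-Llama-70B-abliterated-AWQ",
    quantization="awq",      # recent vLLM versions can also auto-detect this
    dtype="half",            # AWQ kernels expect fp16 activations
    tensor_parallel_size=2,  # adjust to your GPU count
    max_model_len=8192,      # cap context length to fit the KV cache in memory
)

params = SamplingParams(temperature=0.6, max_tokens=512)
outputs = llm.generate(["Explain AWQ quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```

With recent vLLM releases the same model can also be served over HTTP with `vllm serve Ewere/DeepSeek-R1-Distill-Llama-70B-abliterated-AWQ --quantization awq`.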
