Downloads last month: 316
Format: GGUF
Model size: 8B params
Architecture: qwen2
Available quantizations: 4-bit, 16-bit
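
The GGUF files in this repo can be run with any llama.cpp-based runtime. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the exact `.gguf` filename is an assumption (check the repo's file listing for the real quant names), and `n_ctx`/`n_gpu_layers` are illustrative settings, not recommendations from the model author.

```python
# Sketch: fetch one GGUF quant from the repo and run a chat completion locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

REPO_ID = "ngxson/DeepSeek-R1-Distill-Qwen-7B-abliterated-GGUF"
# Hypothetical filename for the 4-bit quant; verify against the repo's files.
FILENAME = "DeepSeek-R1-Distill-Qwen-7B-abliterated-Q4_K_M.gguf"

# Download the weights into the local Hugging Face cache and get the path.
model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

# Load the GGUF file; n_gpu_layers=-1 offloads all layers to the GPU
# when llama.cpp was built with GPU support, otherwise runs on CPU.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
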

Model tree for ngxson/DeepSeek-R1-Distill-Qwen-7B-abliterated-GGUF:
Quantized (162) → this model

Spaces using ngxson/DeepSeek-R1-Distill-Qwen-7B-abliterated-GGUF: 1