Deepseek MoE BF16 for ik_llama quants
Upscaled BF16 quants and imatrix for DeepSeek models. The GGUF files have moved to ModelScope; the imatrix and README remain here.
Imatrix: imatrix-DeepSeek-R1-0528.dat
GGUF files: Moved to ModelScope (see below)
Due to new storage limits introduced by Hugging Face, the GGUF files (30 files × 46 GB ≈ 1.38 TB) have been moved to ModelScope.
pip install modelscope
from modelscope import snapshot_download
model_dir = snapshot_download('quantzor/DeepSeek-R1-0528-256x21B-BF16')
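After a ~1.38 TB transfer, a quick header check on each shard is a cheap way to catch truncated or corrupted files before load time. GGUF files begin with the 4-byte magic `GGUF` followed by a little-endian uint32 format version. This is a minimal sketch, not part of the original instructions; `model_dir` is assumed to be the directory returned by `snapshot_download` above, and the helper names are hypothetical:

```python
import struct
from pathlib import Path

def looks_like_gguf(path):
    # GGUF magic is the ASCII bytes b"GGUF", then a uint32 version.
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != b"GGUF":
        return False
    version = struct.unpack("<I", header[4:8])[0]
    return version > 0

def check_shards(model_dir):
    # Map each .gguf shard name to whether its header looks valid.
    return {p.name: looks_like_gguf(p) for p in Path(model_dir).glob("*.gguf")}
```

A shard that fails this check can be re-downloaded individually instead of repeating the full transfer.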
🔗 https://modelscope.cn/models/quantzor/DeepSeek-R1-0528-256x21B-BF16
Base model
deepseek-ai/DeepSeek-R1-0528