DeepSeek MoE BF16 for ik_llama quants
A collection of upscaled BF16 quants and imatrix files for DeepSeek models. The GGUFs are moving to ModelScope; the imatrix and README.md remain here.
Imatrix: DeepSeek-R1-OG.imatrix (see the quantization sketch at the end of this page)
GGUF files: moved to ModelScope (see below)
IQ2_KS quant from lmganon123: see lmganon123/DeepSeek-R1_IK_GGUF_Q2.
Due to new storage limits introduced by HuggingFace, the GGUF files (30 × 46 GB ≈ 1.38 TB) have been moved to ModelScope.
```bash
pip install modelscope
```

```python
from modelscope import snapshot_download

model_dir = snapshot_download('quantzor/DeepSeek-R1-OG-256x21B-BF16')
```
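If only part of the repository is needed (for example a single BF16 shard to test with), recent versions of modelscope let snapshot_download filter by file pattern. A minimal sketch, assuming your installed modelscope version supports the allow_file_pattern parameter and using a placeholder shard name:

```python
from modelscope import snapshot_download

# Download only selected files instead of the full ~1.38 TB repository.
# NOTE: `allow_file_pattern` is available in recent modelscope releases;
# the shard filename pattern is a placeholder, match it to the files
# listed on the model page before running.
model_dir = snapshot_download(
    'quantzor/DeepSeek-R1-OG-256x21B-BF16',
    allow_file_pattern=['*-00001-of-*.gguf'],
)
print(model_dir)  # local cache directory containing the matched files
```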
Model page: https://modelscope.cn/models/quantzor/DeepSeek-R1-OG-256x21B-BF16
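Given the size of the full download (30 shards, roughly 1.38 TB), it is worth sanity-checking a local copy before quantizing. A minimal check, assuming the shards sit directly in the directory returned by snapshot_download:

```python
from pathlib import Path

# Confirm all 30 BF16 GGUF shards are present and roughly the expected size.
# Replace the path with the directory returned by snapshot_download.
model_dir = Path('DeepSeek-R1-OG-256x21B-BF16')
shards = sorted(model_dir.glob('*.gguf'))
total_gb = sum(f.stat().st_size for f in shards) / 1e9

print(f'{len(shards)} shards, {total_gb:.0f} GB total')
assert len(shards) == 30, 'expected 30 GGUF shards'
```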
Base model: deepseek-ai/DeepSeek-R1
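Once the BF16 shards and DeepSeek-R1-OG.imatrix are local, an ik_llama.cpp quant such as IQ2_KS can be produced with the fork's quantize tool. A minimal sketch, not a prescribed workflow: it assumes the llama-quantize binary built from ik_llama.cpp is on PATH and uses placeholder file names.

```python
import subprocess

# Produce an IQ2_KS quant from the BF16 GGUF using the imatrix.
# Assumptions: `llama-quantize` from ik_llama.cpp is on PATH (older builds
# name the binary `quantize`), and the input/output file names are placeholders.
subprocess.run(
    [
        'llama-quantize',
        '--imatrix', 'DeepSeek-R1-OG.imatrix',              # imatrix from this repo
        'DeepSeek-R1-OG-256x21B-BF16-00001-of-00030.gguf',  # first shard of the split BF16 GGUF (placeholder name)
        'DeepSeek-R1-IQ2_KS.gguf',                          # output quant
        'IQ2_KS',                                           # ik_llama.cpp quant type
    ],
    check=True,
)
```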