gghfez/DeepSeek-R1-OG-256x21B-BF16

Files Available

Imatrix: DeepSeek-R1-OG.imatrix

GGUF files: Moved to ModelScope (see below)

IQ2_KS quant: see lmganon123/DeepSeek-R1_IK_GGUF_Q2 for an IQ2_KS quant from lmganon123.

Why ModelScope?

Due to new storage limits introduced by HuggingFace, the GGUF files (30 × 46 GB ≈ 1.38 TB) have been moved to ModelScope.

Download

Python SDK

pip install modelscope

from modelscope import snapshot_download

# Downloads the full repository; the GGUF shards total roughly 1.38 TB.
model_dir = snapshot_download('quantzor/DeepSeek-R1-OG-256x21B-BF16')
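
Since the repository is roughly 1.38 TB across 30 GGUF shards, it is worth confirming the download completed before pointing an inference runtime at it. Below is a minimal sketch using only the Python standard library; it assumes the model_dir returned by the snippet above and that the shards use the standard .gguf extension.

from pathlib import Path

# Count the downloaded GGUF shards and their total size on disk.
shards = sorted(Path(model_dir).rglob('*.gguf'))
total_gb = sum(f.stat().st_size for f in shards) / 1e9
print(f'{len(shards)} GGUF shards, {total_gb:.0f} GB on disk')  # expect 30 shards, ~1380 GB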

Direct Link

🔗 https://modelscope.cn/models/quantzor/DeepSeek-R1-OG-256x21B-BF16

