# DeepSeek-R1-0528-Qwen3-8B
- Model creator: deepseek-ai
- Original model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
- GGUF quantization: provided by olegshulyakov using llama.cpp
## Special thanks

🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
## Use with Ollama

```sh
ollama run "hf.co/olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q5_K_XL"
```
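Once the model is running, it can also be queried through Ollama's local HTTP API. A minimal sketch, assuming Ollama's default port (11434) and the same hf.co tag pulled above:

```sh
# Query the model through Ollama's local REST API
# (default port 11434; the model name is the hf.co tag used above).
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q5_K_XL",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```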
## Use with LM Studio

```sh
lms load "olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF"
```
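To reach the loaded model from other applications, LM Studio can expose it through its OpenAI-compatible local server. A minimal sketch, assuming a recent `lms` CLI and LM Studio's default port (1234):

```sh
# Start LM Studio's local server; the loaded model is then reachable
# at http://localhost:1234/v1 via an OpenAI-compatible API.
lms server start
```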
## Use with llama.cpp CLI

```sh
llama-cli -hf olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q5_K_XL -p "The meaning to life and the universe is"
```
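The same invocation accepts the usual llama.cpp options. A minimal variant, assuming a GPU-enabled build; the `-ngl 99` layer-offload count and the 8192-token context are illustrative values, not recommendations from this card:

```sh
# Offload all layers to the GPU and set an explicit context size.
llama-cli -hf olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q5_K_XL \
  -ngl 99 -c 8192 \
  -p "The meaning to life and the universe is"
```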
## Use with llama.cpp Server

```sh
llama-server -hf olegshulyakov/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q5_K_XL -ngl 99 -c 0
```
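Once the server is up, it exposes an OpenAI-compatible HTTP endpoint. A minimal sketch, assuming llama-server's default port (8080):

```sh
# Send a chat completion request to the OpenAI-compatible endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Explain the meaning of life in one sentence."}
    ]
  }'
```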