MidnightPhreaker/KAT-Dev-72B-Exp-GPTQ-INT4-gs32-0.01
Tags: Safetensors · qwen2 · gptq · quantized · vllm · 4bit · group_size_32 · compressed-tensors
License: apache-2.0
File: KAT-Dev-72B-Exp-GPTQ-INT4-gs32-0.01 / model-00003-of-00009.safetensors
Commit History
Upload GPTQ quantized model (group_size=32)
0f4738f (verified) · Shane · committed on Oct 22
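The tags above indicate a 4-bit GPTQ quantization with group_size=32, stored as safetensors in compressed-tensors format and intended for serving with vLLM. A minimal loading sketch follows, assuming vLLM is installed and picks up the quantization configuration automatically from the repository's config files; the tensor_parallel_size and sampling settings are illustrative assumptions, not values taken from this page.

```python
# Minimal sketch: loading this GPTQ INT4 (group_size=32) checkpoint with vLLM.
# Assumptions: vLLM auto-detects the quantization config from the repo;
# tensor_parallel_size=4 is illustrative (a 72B model in 4-bit still needs
# several tens of GB of GPU memory, typically spread across multiple GPUs).
from vllm import LLM, SamplingParams

llm = LLM(
    model="MidnightPhreaker/KAT-Dev-72B-Exp-GPTQ-INT4-gs32-0.01",
    tensor_parallel_size=4,  # assumption: adjust to the GPUs available
)

sampling = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(
    ["Write a Python function that reverses a linked list."],
    sampling,
)
print(outputs[0].outputs[0].text)
```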