MidnightPhreaker/KAT-Dev-72B-Exp-GPTQ-INT4-gs32-0.01
Tags: Safetensors · qwen2 · gptq · quantized · vllm · 4bit · group_size_32 · compressed-tensors
License: apache-2.0
File: merges.txt (branch: main)
Shane — "Upload GPTQ quantized model (group_size=32)" — commit 0f4738f (verified), 22 days ago
Size: 1.67 MB (scanned: safe)
File too large to display; check the raw version instead.