Tags: Text Generation · Transformers · Safetensors · gemma2 · gptq · conversational · text-generation-inference · 4-bit precision · License: gemma
Gemma-2-9B-Instruct-4Bit-GPTQ
Original Model: gemma-2-9b-it
Model Creator: google
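
A minimal usage sketch with 🤗 Transformers, assuming the GPTQ backend packages (e.g. optimum and auto-gptq) are installed; the prompt and generation settings are illustrative, not part of this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Granther/Gemma-2-9B-Instruct-4Bit-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers loads the packed 4-bit GPTQ weights through its GPTQ integration.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain GPTQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```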
Quantization
This model was quantized to 4-bit precision with the AutoGPTQ library.
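
A sketch of how such a checkpoint can be produced with AutoGPTQ. The card does not state the quantization hyperparameters or calibration data, so the group size, act-order setting, and calibration sample below are assumptions:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "google/gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(base_model)

# 4-bit is stated on the card; group_size and desc_act are assumed values.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# Calibration examples: tokenized text samples (placeholder data here; a real
# run would use a few hundred samples from a general-purpose corpus).
calibration = [
    tokenizer("GPTQ calibrates each layer on sample activations.", return_tensors="pt")
]
model.quantize(calibration)

model.save_quantized("Gemma-2-9B-Instruct-4Bit-GPTQ")
tokenizer.save_pretrained("Gemma-2-9B-Instruct-4Bit-GPTQ")
```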
Metrics
| Benchmark | Metric | Gemma 2 GPTQ | Gemma 2 9B it |
|---|---|---|---|
| PIQA | 0-shot | 80.52 | 80.79 |
| MMLU | 5-shot | 52.0 | 50.00 |
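
The card does not say how these scores were measured. One plausible way to reproduce numbers like these is EleutherAI's lm-evaluation-harness; the task names and settings below are assumptions, not the setup used for this card:

```python
# pip install lm-eval
import lm_eval

model_args = "pretrained=Granther/Gemma-2-9B-Instruct-4Bit-GPTQ"

# PIQA is reported 0-shot and MMLU 5-shot, so run the two tasks separately.
for task, shots in [("piqa", 0), ("mmlu", 5)]:
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args=model_args,
        tasks=[task],
        num_fewshot=shots,
    )
    print(task, results["results"])
```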
Model lineage: google/gemma-2-9b (base) → google/gemma-2-9b-it (finetuned) → Granther/Gemma-2-9B-Instruct-4Bit-GPTQ (this model, quantized)