# pythia-6.9b - AWQ (4-bit)

Source model: EleutherAI/pythia-6.9b

This model was quantized to 4-bit using llm-compressor.

Quantization parameters: 4-bit, symmetric scheme.
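
To illustrate what a symmetric 4-bit scheme means, here is a minimal sketch (not the actual llm-compressor implementation): each weight is mapped to an integer code in [-8, 7] using a single scale with a zero-point of 0, and dequantized as `code * scale`.

```python
# Illustrative sketch of symmetric 4-bit quantization.
# Symmetric: zero-point is 0, so only a scale is stored.

def quantize_sym4(weights):
    # Per-tensor scale so the largest magnitude maps near the int4 edge.
    scale = max(abs(w) for w in weights) / 7
    # Clamp rounded codes to the signed 4-bit range [-8, 7].
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_sym4(codes, scale):
    return [c * scale for c in codes]

weights = [0.12, -0.7, 0.33, 0.26]
codes, scale = quantize_sym4(weights)
recovered = dequantize_sym4(codes, scale)
print(codes)  # integer codes in [-8, 7]
```

The real quantizer operates per-group on model tensors, but the round-to-scaled-integer idea is the same.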

## Usage

```python
# pip install vllm
from vllm import LLM

model = LLM("iproskurina/pythia-6.9b-awq-int4")
outputs = model.generate("The capital of France is")
# generate() returns a list of RequestOutput objects;
# the generated text lives in outputs[i].outputs[0].text.
print(outputs[0].outputs[0].text)
```
