Meta-Llama-3.1-Quantized Collection
A collection of quantized Llama 3.1 models (8B and 70B versions for now), quantized with bitsandbytes.
This is a quantized version of Llama 3.1 70B Instruct, quantized to 4-bit (NF4) using bitsandbytes and accelerate.
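
A minimal sketch of how such an NF4 load is typically configured with bitsandbytes; the exact settings used to produce this checkpoint are an assumption, not stated in the card:

# Hedged sketch: assumed BitsAndBytesConfig for a 4-bit NF4 load
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize linear-layer weights to 4-bit on load
    bnb_4bit_quant_type="nf4",              # NormalFloat4, matching the NF4 in the model name
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-70B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",  # accelerate places the layers across available devices
)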
Use a pipeline as a high-level helper:
# Use a pipeline as a high-level helper
from transformers import pipeline
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="fsaudm/Meta-Llama-3.1-70B-Instruct-NF4")
pipe(messages)
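
At 70B parameters, even a 4-bit checkpoint needs roughly 35 GB for the weights alone, so you will likely want accelerate to place the shards across your hardware. A hedged variant of the snippet above; device_map="auto" is an assumption about your setup, not part of the original snippet:

pipe = pipeline(
    "text-generation",
    model="fsaudm/Meta-Llama-3.1-70B-Instruct-NF4",
    device_map="auto",  # requires accelerate; spreads layers across available devices
)
pipe(messages)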
Load model directly
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("fsaudm/Meta-Llama-3.1-70B-Instruct-NF4")
model = AutoModelForCausalLM.from_pretrained("fsaudm/Meta-Llama-3.1-70B-Instruct-NF4")
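
A minimal generation sketch with the directly loaded model; the chat-template call is standard transformers usage, while the prompt and max_new_tokens value are illustrative assumptions:

messages = [
    {"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)  # illustrative length
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))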
Information about the base model can be found in the original meta-llama/Meta-Llama-3.1-70B-Instruct model card.
Base model: meta-llama/Llama-3.1-70B