We quantized mistralai/Mistral-Small-24B-Instruct-2501 to a 4-bit model using BitsAndBytes.
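A checkpoint like this one can be produced with a recipe along the following lines. The specific `BitsAndBytesConfig` settings shown (NF4 quantization, nested quantization, bfloat16 compute dtype) are illustrative defaults, not the recorded configuration for this export:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative settings: NF4 with nested quantization and a bfloat16
# compute dtype are common choices, not confirmed for this checkpoint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-24B-Instruct-2501",
    quantization_config=bnb_config,
    device_map="auto",
)

# Writing the quantized weights back out yields a standalone 4-bit checkpoint.
model.save_pretrained("Mistral-Small-24B-Instruct-2501-bnb-4bit")
```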
To use this model, first install BitsAndBytes:

```bash
pip install -U bitsandbytes
```
Then load the model with AutoModelForCausalLM:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "minicreeper/Mistral-Small-24B-Instruct-2501-bnb-4bit"
)
```
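For a complete round trip, here is a sketch of loading the tokenizer and generating a reply. `device_map="auto"` requires the accelerate package, and the prompt and generation settings are only illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minicreeper/Mistral-Small-24B-Instruct-2501-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the 4-bit weights on the available GPU(s);
# it requires `pip install accelerate`.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "Explain 4-bit quantization in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```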
Model tree for minicreeper/Mistral-Small-24B-Instruct-2501-bnb-4bit
- Base model: mistralai/Mistral-Small-24B-Base-2501