This is a Q8_0 quantized GGUF model of LLaVA 1.6, based on Mistral-7B-Instruct-v0.2.

Run it with llama-cpp-python:

```python
# !pip install llama-cpp-python huggingface_hub

from llama_cpp import Llama

# Download the Q8_0 GGUF from the Hub and load it.
llm = Llama.from_pretrained(
    repo_id="Steven0090/llava1.6-Mistral-7B-Instruct-v0.2-gguf",
    filename="Mistral-7B-Instruct-v0.2-Q8_0.gguf",
)
```
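
Once loaded, the model can be queried like any other llama-cpp-python model. A minimal text-only sketch; the prompt and `max_tokens` value are illustrative:

```python
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain in one sentence what the GGUF format is."}
    ],
    max_tokens=128,  # illustrative cap on the generated length
)
print(response["choices"][0]["message"]["content"])
```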
Model details:
- Format: GGUF
- Model size: 7B params
- Architecture: llama
- Quantization: Q8_0 (8-bit)

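LLaVA 1.6 is a vision-language model, so image inputs need the matching CLIP/mmproj GGUF loaded through a LLaVA chat handler. A sketch under the assumption that the repo ships an mmproj file; the `mmproj-model-f16.gguf` filename below is hypothetical, so check the repo's file list first:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava16ChatHandler

# Hypothetical mmproj filename; replace with the actual CLIP/mmproj
# GGUF listed in the repo before running.
clip_path = hf_hub_download(
    repo_id="Steven0090/llava1.6-Mistral-7B-Instruct-v0.2-gguf",
    filename="mmproj-model-f16.gguf",
)
chat_handler = Llava16ChatHandler(clip_model_path=clip_path)

llm = Llama.from_pretrained(
    repo_id="Steven0090/llava1.6-Mistral-7B-Instruct-v0.2-gguf",
    filename="Mistral-7B-Instruct-v0.2-Q8_0.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,  # extra context leaves room for image tokens
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                # A remote URL or a base64 data URI both work here.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
)
print(response["choices"][0]["message"]["content"])
```

If the repo only contains the language-model GGUF, the text-only usage above still works; image support just requires the extra projector file.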