This is a GGUF (16-bit) version of Apertus-8B.

To use it:

  1. Clone the llama.cpp repository: https://github.com/ggml-org/llama.cpp
  2. Copy Apertus-8B-Instruct.jinja into the folder llama.cpp/models/templates
  3. Go to llama.cpp/build/bin
  4. Run: llama-cli -m <path-to-the-gguf-file> --chat-template-file ../../models/templates/Apertus-8B-Instruct.jinja -i --color -n 512 -c 4096 --jinja
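The steps above can be sketched end to end as a shell session. This is a hedged sketch, not a verified recipe: it assumes a standard CMake build of llama.cpp (the original skips the build step but expects `build/bin` to exist), and the model filename `Apertus-8B.gguf` and the template's source location are placeholders you must substitute with your own paths.

```shell
# 1. Clone llama.cpp
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# Build it (assumed: standard CMake build producing build/bin)
cmake -B build
cmake --build build --config Release

# 2. Copy the chat template into models/templates
#    (source path of the .jinja file is a placeholder)
cp /path/to/Apertus-8B-Instruct.jinja models/templates/

# 3. Go to the binaries folder
cd build/bin

# 4. Run interactive chat with the template
#    (model filename is a placeholder; flags are from the original command)
./llama-cli -m /path/to/Apertus-8B.gguf \
  --chat-template-file ../../models/templates/Apertus-8B-Instruct.jinja \
  -i --color -n 512 -c 4096 --jinja
```

`-n 512` caps generation length, `-c 4096` sets the context size, and `--jinja` tells llama-cli to interpret the supplied chat-template file as a Jinja template.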

I used WSL to run it.
