BgGPT-Gemma-2
BgGPT is distributed under the Gemma Terms of Use.
This repo contains the GGUF format model files for INSAIT-Institute/BgGPT-Gemma-2-27B-IT-v1.0.
Install the required package:
pip install llama-cpp-python
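The examples below expect a local path to one of the GGUF files. If you have not downloaded a file yet, one option (a sketch, not part of the original instructions) is to fetch it with huggingface_hub; the repo id and filename below are assumptions and should be replaced with the actual file you want:

from huggingface_hub import hf_hub_download

# Hypothetical repo id and filename; substitute the GGUF file you intend to use
model_path = hf_hub_download(
    repo_id="INSAIT-Institute/BgGPT-Gemma-2-27B-IT-v1.0-GGUF",
    filename="BgGPT-Gemma-2-27B-IT-v1.0.Q4_K_M.gguf",
)
# model_path now points to the downloaded file on local disk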
Example chat completion:
from llama_cpp import Llama

# Load the GGUF model with an 8192-token context window
llm = Llama(
    model_path="path/to/your/model.gguf",
    n_ctx=8192,
    penalize_nl=False
)

# The question asks "When was Sofia University founded?"
messages = [{"role": "user", "content": "Кога е основан Софийският университет?"}]

response = llm.create_chat_completion(
    messages=messages,
    max_tokens=2048,        # Choose maximum generated tokens
    temperature=0.1,
    top_p=0.9,
    repeat_penalty=1.0,
    stop=["<eos>", "<end_of_turn>"]  # Gemma end-of-sequence / end-of-turn markers
)
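create_chat_completion returns an OpenAI-style chat completion dictionary; a minimal way to print the generated answer (not shown in the original example) is:

# The generated answer is in the first choice's message content
print(response["choices"][0]["message"]["content"])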
Example text completion with a manually formatted Gemma prompt:
from llama_cpp import Llama

# Load the GGUF model with an 8192-token context window
llm = Llama(
    model_path="path/to/your/model.gguf",
    n_ctx=8192,
    penalize_nl=False
)

# Prompt in the Gemma chat format; the question asks "When was Sofia University founded?"
prompt = "<start_of_turn>user\nКога е основан Софийският университет?<end_of_turn>\n<start_of_turn>model\n"

response = llm(
    prompt,
    max_tokens=2048,        # Choose maximum generated tokens
    temperature=0.1,
    top_p=0.9,
    repeat_penalty=1.0,
    stop=["<eos>", "<end_of_turn>"]  # Gemma end-of-sequence / end-of-turn markers
)
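Here llm(prompt, ...) returns a plain completion dictionary, so the generated text sits under the "text" field of the first choice; a minimal usage note (not shown in the original example):

# The generated continuation of the prompt
print(response["choices"][0]["text"])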
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit GGUF variants.