ReasoningLlama-Math-1B-IT-gguf

Model Description

This is a fine-tuned version of unsloth/Llama-3.2-1B trained on unsloth/OpenMathReasoning-mini, a small subset of the nvidia/OpenMathReasoning dataset, which was used to win the AIMO (AI Mathematical Olympiad) challenge.

  • Recommended inference settings: min_p = 0.1 and temperature = 1.5. Read this Tweet to understand why (a usage sketch follows this list).
  • License: apache-2.0
  • Quantized from model: CannaeAI/ReasoningLlama-Math-1B-IT
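
For context, here is a minimal inference sketch using llama-cpp-python with the recommended sampling settings. The file path and prompt are illustrative; any of the GGUF files listed below should work.

```python
from llama_cpp import Llama

# Load a local GGUF file (path is illustrative; see the file list below).
llm = Llama(model_path="ReasoningLlama-Math-1B.Q4_K_M.gguf", n_ctx=4096)

# Recommended sampling settings from this card: temperature = 1.5 with min_p = 0.1.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve for x: 3x + 7 = 22"}],
    temperature=1.5,
    min_p=0.1,
)
print(out["choices"][0]["message"]["content"])
```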

Available model files (a download example follows the list):

  • ReasoningLlama-Math-1B.Q5_K_M.gguf
  • ReasoningLlama-Math-1B.Q8_0.gguf
  • ReasoningLlama-Math-1B.Q4_K_M.gguf
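
To fetch one of these files programmatically, a minimal sketch using huggingface_hub is shown below; the Q4_K_M quant is chosen arbitrarily, and any of the three files can be substituted.

```python
from huggingface_hub import hf_hub_download

# Download one of the quantized files listed above (Q4_K_M shown here).
model_path = hf_hub_download(
    repo_id="Cannae-AI/ReasoningLlama-Math-1B-IT-gguf",
    filename="ReasoningLlama-Math-1B.Q4_K_M.gguf",
)
print(model_path)
```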

Ollama

An Ollama Modelfile is included for easy deployment.
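
The Modelfile shipped in the repository is authoritative; the sketch below only illustrates what a minimal Modelfile for one of the quantized files might look like, wired to the recommended sampling settings (the file name is illustrative, and the min_p parameter requires a reasonably recent Ollama version).

```
FROM ./ReasoningLlama-Math-1B.Q4_K_M.gguf

# Recommended sampling settings from the model card.
PARAMETER temperature 1.5
PARAMETER min_p 0.1
```

Once the Modelfile is in place, `ollama create reasoningllama-math -f Modelfile` registers the model and `ollama run reasoningllama-math` starts an interactive session.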
