# 🧠 gemma-3-12b-it-Ko-Reasoning
A large-scale Korean reasoning model fine-tuned from google/gemma-3-12b-it, designed to excel in logical and multi-hop reasoning tasks in Korean.
## 🚀 Overview
gemma-3-12b-it-Ko-Reasoning is a fine-tuned version of google/gemma-3-12b-it, specifically optimized for logical reasoning in Korean. This model is part of a broader research initiative to explore:
- The transition from multilingual reasoning LLMs to Korean-specialized reasoning models
- The enhancement of non-reasoning Korean language models into reasoning-capable variants
- The development of open-access models that rival proprietary alternatives in complex reasoning tasks
This model was fine-tuned using a large-scale Korean-English instruction dataset containing diverse multi-hop questions, symbolic logic tasks, and human-crafted reasoning steps.
## 🧪 Benchmark Results
- 📌 All benchmarks were measured with the 0-shot CoT (Chain-of-Thought) method.
- 📌 Score is either answer accuracy (%) or a 1-10 rating from a judge model.
- 📌 LLM-as-a-judge benchmarks were evaluated using GPT-4o (2024-08-01-preview); a sketch of the judging setup follows the table.
| Benchmark | Score | 
|---|---|
| GPQA diamond | 61.3 | 
| GSM8K | 59.6 | 
| HAERAE | 73.9 | 
| KSM | 66.7 | 
| LogicKor | 8.56 | 
| Math500 | 77.8 | 
| MT-Bench | 8.54 | 
| MT-Bench(Ko) | 8.80 | 
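For reference, the 1-10 judge scores can be collected along these lines. This is a minimal sketch assuming the OpenAI Python SDK; the rubric prompt and the `judge_score` helper below are illustrative assumptions, not the exact LogicKor/MT-Bench judge templates.

```python
# Minimal LLM-as-a-judge sketch (illustrative; not the official benchmark
# harness). Assumes the OpenAI Python SDK and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

JUDGE_TEMPLATE = """You are an impartial judge. Rate the assistant's answer
to the question below on a scale of 1 to 10. Reply with the number only.

[Question]
{question}

[Answer]
{answer}"""

def judge_score(question: str, answer: str) -> float:
    """Ask the judge model for a 1-10 rating of a single answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # the card reports GPT-4o (2024-08-01-preview)
        temperature=0.0,
        messages=[{
            "role": "user",
            "content": JUDGE_TEMPLATE.format(question=question, answer=answer),
        }],
    )
    return float(response.choices[0].message.content.strip())
```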
## 🧑‍💻 Usage
Install Transformers >= 4.50 (the first release with Gemma 3 support):

```bash
pip install -U transformers
```
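If in doubt, a quick programmatic version check (`packaging` is already a Transformers dependency):

```python
# Sanity check: Gemma 3 classes landed in Transformers v4.50.
from packaging.version import Version
import transformers

assert Version(transformers.__version__) >= Version("4.50.0"), transformers.__version__
```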
Basic example:
```python
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
import torch

model_id = "DimensionSTP/gemma-3-12b-it-Ko-Reasoning"

# Load the model and processor; device_map="auto" places weights on the
# available GPU(s).
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto"
).eval()
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            # "Between Seoul and Busan, which is bigger?"
            {"type": "text", "text": "서울과 부산 중 어디가 더 커?"}
        ]
    }
]

# Build the chat-formatted prompt and tokenize it.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

# Greedy decoding; a generous token budget leaves room for long reasoning chains.
with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=8192, do_sample=False)
    generation = generation[0][input_len:]  # keep only the newly generated tokens

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
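If the full bfloat16 model does not fit in GPU memory, 4-bit quantized loading is one option. This is a sketch under the assumption that `bitsandbytes` is installed; it is not part of the original card, and generation quality may differ slightly from the full-precision weights.

```python
# Optional: load the 12B model in 4-bit to reduce GPU memory
# (sketch; assumes `pip install bitsandbytes`).
import torch
from transformers import BitsAndBytesConfig, Gemma3ForConditionalGeneration

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
)
model = Gemma3ForConditionalGeneration.from_pretrained(
    "DimensionSTP/gemma-3-12b-it-Ko-Reasoning",
    device_map="auto",
    quantization_config=quant_config,
).eval()
```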
## 🧠 Base Model: google/gemma-3-12b-it
The base model, google/gemma-3-12b-it, is a vision-language model (VLM) developed by Google. For more technical details, refer to the Gemma 3 Technical Report.
## 🧱 Model Architecture
| Property | Value | 
|---|---|
| Architecture | Gemma3ForConditionalGeneration | 
| Parameters | 12B | 
| Context Length | 128,000 tokens | 
| Tokenizer | Gemma 3 SentencePiece tokenizer (262k vocabulary) | 
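The values above can be sanity-checked against the hosted checkpoint's config. A small sketch; the attribute names follow the Gemma 3 config layout in Transformers:

```python
# Inspect architecture metadata straight from the hosted config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("DimensionSTP/gemma-3-12b-it-Ko-Reasoning")
print(config.architectures)                        # ['Gemma3ForConditionalGeneration']
print(config.text_config.max_position_embeddings)  # context length
print(config.text_config.vocab_size)               # tokenizer vocabulary size
```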
## 📅 Release Date
Released in March 2025 as part of the Ko-Reasoning Series, which focuses on pushing the boundaries of open-source reasoning in Korean using modern LLMs.
## 💬 Contact
For questions, collaborations, or deployment inquiries, please contact:
- 🤗 Hugging Face: https://huggingface.co/DimensionSTP
- ✉️ Email: [[email protected]]
## 📦 Available Checkpoints
- ✅ `main`: final stable version (the last branch)
- ✅ All training artifacts available (tokenizer, config, model weights)