Tags: PEFT, Safetensors, Transformers, English, Japanese, text-generation-inference, unsloth, gemma3, trl

Uploaded model: gemma-3-JP-EN-Translator-v1-LoRA-4B

Prompt format: ChatML

Recommended system prompt: You are a helpful assistant that translates Japanese to English.

Recommended sampling settings: temperature 0.2 (or lower), repetition_penalty 1.04 (or slightly higher)
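
For reference, a ChatML prompt with the recommended system prompt would look like the sketch below (a plain-Python illustration; the Japanese user line is just a placeholder):

```python
# Sketch of a ChatML-formatted prompt using the recommended system prompt.
# The user turn holds placeholder Japanese; swap in the text to translate.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant that translates Japanese to English.<|im_end|>\n"
    "<|im_start|>user\n"
    "吾輩は猫である。<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```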

Merged model: mpasila/gemma-3-JP-EN-Translator-v1-4B
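
A minimal inference sketch using the merged model and the recommended sampling settings, assuming the merged checkpoint loads as a standard text-generation model in a recent transformers release and that its tokenizer ships with the ChatML chat template:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="mpasila/gemma-3-JP-EN-Translator-v1-4B",
    device_map="auto",
    torch_dtype="bfloat16",
)

messages = [
    {"role": "system",
     "content": "You are a helpful assistant that translates Japanese to English."},
    # Placeholder Japanese source text:
    {"role": "user", "content": "吾輩は猫である。名前はまだ無い。"},
]

out = pipe(
    messages,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.2,          # recommended: 0.2 or lower
    repetition_penalty=1.04,  # recommended: 1.04 or slightly higher
)
print(out[0]["generated_text"][-1]["content"])
```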

Training used LoRA rank 128 with alpha set to 32. The context length was set to 16384, but since more of the training data fits within an 8k context, using an 8k context length will likely perform better.
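
As a rough reconstruction, the adapter settings described above would correspond to a peft LoraConfig along these lines (target modules and dropout are assumptions, not stated in this card):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,             # LoRA rank used for training
    lora_alpha=32,     # alpha, as stated above
    lora_dropout=0.0,  # assumption; not stated in the card
    bias="none",
    task_type="CAUSAL_LM",
    # Assumed target modules (the attention/MLP projections commonly targeted):
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```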

Training dataset: mpasila/ParallelFiction-Ja_En-1k-16k-Gemma-3-ShareGPT-Filtered

Original dataset (before filtering/cleaning): NilanE/ParallelFiction-Ja_En-100k

  • Developed by: mpasila
  • License: Gemma 3
  • Fine-tuned from model: unsloth/gemma-3-4b-pt-unsloth-bnb-4bit

This gemma3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
