Upload folder using huggingface_hub
README.md CHANGED
@@ -54,8 +54,7 @@ import json
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-
-model_id = "/data/nas-2/seungduk/eeve2/babel/datasets/gemma-3-4b-rosetta-revision4-stage2"
+model_id = "yanolja/YanoljaNEXT-Rosetta-4B"
 model = AutoModelForCausalLM.from_pretrained(
     model_id,
     dtype=torch.bfloat16,
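The hunk above swaps the private local checkpoint path for the published Hub ID. For context, here is a minimal sketch of how the updated quick-start snippet might read end to end; everything after `dtype=torch.bfloat16` (device placement, the tokenizer call, the prompt, and the generation settings) is an assumption for illustration, not taken from the diff.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hub ID introduced by this commit (replaces the local checkpoint path).
model_id = "yanolja/YanoljaNEXT-Rosetta-4B"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    dtype=torch.bfloat16,   # as in the README hunk above
    device_map="auto",      # assumption: device placement is outside the visible diff
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumed example prompt; the README's actual prompt/chat format is outside this hunk.
prompt = "Translate the following English sentence into Korean: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

# Print only the newly generated tokens.
new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

The `dtype=` keyword is kept as shown in the hunk; on older transformers releases the same setting is passed as `torch_dtype=`.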
@@ -154,10 +153,11 @@ The model was fine-tuned on multilingual translation data to optimize performanc
 
 ### Translation Quality Benchmarks
 
-The following CHrF++ scores demonstrate the model's competitive performance compared to other state-of-the-art translation models on English to Korean translation:
+The following CHrF++ scores (WMT24++) demonstrate the model's competitive performance compared to other state-of-the-art translation models on English to Korean translation:
 
-| Model                              | CHrF++ Score |
+| Model                              | CHrF++ Score (WMT24++) |
 |------------------------------------|--------------|
+| yanolja/YanoljaNEXT-Rosetta-12B    | 34.75        |
 | yanolja/YanoljaNEXT-Rosetta-20B    | 33.87        |
 | google/gemini-2.0-flash-001        | 33.81        |
 | openai/gpt-oss-120b                | 31.51        |
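The table above reports CHrF++ (chrF extended with word unigrams and bigrams) on WMT24++ English to Korean. As a reference for how such a score is typically computed, below is a small sketch using sacrebleu's `CHRF` metric with `word_order=2`; the hypothesis and reference strings are placeholders, not the WMT24++ data behind the table.

```python
from sacrebleu.metrics import CHRF

# Placeholder system outputs and references -- not the WMT24++ evaluation data.
hypotheses = ["오늘은 날씨가 좋습니다."]    # one model output per source sentence
references = [["오늘 날씨가 좋네요."]]      # one reference stream, aligned with hypotheses

# word_order=2 is what turns chrF into chrF++ (character 6-grams plus word 1/2-grams).
chrf_pp = CHRF(word_order=2)
result = chrf_pp.corpus_score(hypotheses, references)
print(round(result.score, 2))   # corpus-level chrF++ on this toy pair
```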