ELlama1

A series of LLMs trained on Greek-language text

ELlama1-0.7b

A model built on top of Qwen (yes, yes, don't be surprised).

ELlama1-0.7b is a pretrained base model, trained on a sample of fineweb2: fineweb2-modern-greece-sample.
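
For context, here is a minimal sketch of loading Greek text from fineweb2 with the datasets library. The dataset id and config name below are assumptions: the card only names the sample as fineweb2-modern-greece-sample.

from datasets import load_dataset

# Assumed dataset id and config; the card only mentions a fineweb2 sample.
ds = load_dataset(
    "HuggingFaceFW/fineweb-2",  # assumed upstream dataset id
    name="ell_Grek",            # assumed config for Modern Greek
    split="train",
    streaming=True,             # stream instead of downloading the full dump
)

# Peek at a few documents.
for example in ds.take(3):
    print(example["text"][:200])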

Quick Start

Hugging Face

import torch
from transformers import AutoModelForCausalLM, PreTrainedTokenizerFast

device = "cuda"

model_path = "dmitry315/ELlama1-0.7b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True)
model.to(device)
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_path, trust_remote_code=True)

text = "Γεια σας , δεν ξερω τιποτα για τον Ηροδοτο , μπορειτε να μου πειτε γι ' αυτον ;"
    
with torch.no_grad():
    inputs = tokenizer(
        text, 
        return_tensors="pt", 
        padding=True, 
        truncation=True, 
        max_length=128
    ).to(device)
    outputs = model.generate(
        inputs.input_ids,
        attention_mask=inputs.attention_mask,  # pass the mask along with the ids
        max_new_tokens=128,  # generate up to 128 tokens beyond the prompt
        temperature=1.0,
        top_p=0.95,  # nucleus sampling; top_p must lie in (0, 1]
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
        num_return_sequences=1
    )
    generated_text = tokenizer.decode(
        outputs[0], 
        skip_special_tokens=True
    )

print(generated_text)
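
The same generation can also be done in one call with the transformers text-generation pipeline. This is a minimal alternative sketch, not the card's official recipe:

import torch
from transformers import pipeline

# One-call alternative to the manual tokenize/generate/decode steps above.
generator = pipeline(
    "text-generation",
    model="dmitry315/ELlama1-0.7b",
    torch_dtype=torch.float16,
    device=0,  # GPU index; use device=-1 to run on CPU
    trust_remote_code=True,
)

prompt = "Γεια σας , δεν ξερω τιποτα για τον Ηροδοτο , μπορειτε να μου πειτε γι ' αυτον ;"
print(generator(prompt, max_new_tokens=128, do_sample=True, top_p=0.95)[0]["generated_text"])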

GitHub

Training code: ELlama
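
The repository above contains the actual training code. As a rough orientation only, a continued-pretraining loop with the Hugging Face Trainer could look like the sketch below; the dataset id, config, and hyperparameters are all assumptions, not the values used for ELlama1:

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    DataCollatorForLanguageModeling,
    PreTrainedTokenizerFast,
    Trainer,
    TrainingArguments,
)

model_path = "dmitry315/ELlama1-0.7b"
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)

if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # reuse EOS for padding

# Assumed dataset id/config; substitute the actual fineweb2 sample.
raw = load_dataset("HuggingFaceFW/fineweb-2", name="ell_Grek", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="ellama1-continued",  # hypothetical output directory
        per_device_train_batch_size=8,   # assumed hyperparameters
        num_train_epochs=1,
        fp16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM
)
trainer.train()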
