smollm-360m-code-lora

This model is a LoRA fine-tune of HuggingFaceTB/SmolLM2-360M, with the adapter merged into the base weights so it loads as a standalone model.

Model Details

  • Base Model: HuggingFaceTB/SmolLM2-360M
  • Fine-tuning Method: LoRA (Low-Rank Adaptation), with the adapter merged into the base weights (see the sketch below)
  • Training Data: Python code generation tasks
  • Format: Safetensors, F16, ≈0.4B parameters
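
Because the adapter is already merged, no PEFT step is needed at load time. For reference, a merge of this kind is typically done with peft's merge_and_unload; a minimal sketch, assuming a hypothetical local adapter path:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the trained LoRA adapter
base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-360M",
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical adapter path

# Fold the low-rank deltas into the base weights and drop the adapter wrappers
merged = model.merge_and_unload()
merged.save_pretrained("smollm-360m-code-lora")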

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teamaMohamed115/smollm-360m-code-lora"

# Load model (adapter already merged)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
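
Once loaded, the model can generate code with the standard generate API; a minimal sketch (the prompt and sampling settings below are illustrative, not from the card):

# Encode a code prompt and move it to the model's device
prompt = "# Write a function that reverses a string\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a completion; low temperature keeps code output focused
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.2,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))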
