# smollm-360m-code-lora

This is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) trained with LoRA (Low-Rank Adaptation). The adapter has already been merged into the base weights, so the model loads like any standard causal LM, with no PEFT dependency required.
## Model Details

- **Base model:** HuggingFaceTB/SmolLM2-360M
- **Fine-tuning method:** LoRA (Low-Rank Adaptation), merged into the base weights
- **Training:** fine-tuned on Python code generation tasks
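For reference, merging a LoRA adapter into base weights is typically done with PEFT's `merge_and_unload()`. The snippet below is a minimal sketch of that process, not the exact script used to produce this checkpoint; the adapter path is a placeholder.

```python
# Sketch: merging a LoRA adapter into base weights with PEFT.
# "path/to/lora-adapter" is a placeholder, not this repo's actual adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-360M")
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()

# The merged model is a plain causal LM; PEFT is not needed to load it later.
merged.save_pretrained("smollm-360m-code-lora")
```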
## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teamaMohamed115/smollm-360m-code-lora"

# Load model (adapter already merged, so no PEFT required)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
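A minimal generation example follows. The prompt and decoding settings are illustrative choices, not values recommended by the model authors; adjust them for your task.

```python
# Illustrative code-completion request using the model loaded above.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.2,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```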