LLaMA 3.2 3B – Physics Fine-Tuned (Camel-AI Dataset, LoRA + Unsloth)
This model is a LoRA fine-tuned version of unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit, trained on the Camel-AI Physics dataset to enhance reasoning and accuracy in physics-related instruction tasks.
It combines instruction tuning with domain specialization, producing concise, technically correct, and context-aware answers to physics questions.
Model Description
This model is fine-tuned for physics understanding, explanation, and reasoning tasks.
It improves over the base LLaMA 3.2 3B Instruct model in areas such as:
- Step-by-step reasoning for conceptual physics problems
- Clear and concise answers to physics questions
- Context-sensitive scientific explanations
- Instruction-following accuracy in domain-specific prompts
The fine-tuning was performed with parameter-efficient LoRA adapters, enabling high-quality domain adaptation with low compute overhead.
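As a rough illustration of why LoRA is parameter-efficient, the sketch below compares the number of trainable parameters in a full update of one projection matrix against a rank-r LoRA update. The hidden size (3072) and rank (16) are assumptions for illustration only; the actual values used for this model are not stated in this card.

```python
# Hypothetical sizes for illustration: a 3072 x 3072 projection matrix
# (a common hidden size for ~3B LLaMA models) and LoRA rank r = 16.
d, k = 3072, 3072
r = 16

# A full fine-tune updates every entry of the d x k weight matrix.
full_update_params = d * k

# LoRA instead learns W + (alpha / r) * B @ A, where B is d x r and
# A is r x k, so only the two low-rank factors are trained.
lora_update_params = d * r + r * k

print(full_update_params)  # 9437184
print(lora_update_params)  # 98304
print(f"trainable fraction: {lora_update_params / full_update_params:.2%}")
```

With these assumed sizes, the LoRA factors hold only about 1% of the parameters of the full matrix, which is what keeps the compute and memory overhead low.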
Example Usage
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/llama-3.2-3b-physics-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # lower memory use; switch to torch.float32 on CPU
    device_map="auto",           # requires accelerate; places layers automatically
)

# LLaMA 3.2 is an instruct model, so wrap the question in the chat template.
messages = [{"role": "user", "content": "Explain why the sky appears blue using Rayleigh scattering."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))