πŸŽ“ Qwen2.5-Coder-1.5B-Educational

License HumanEval Base Model


πŸ“‹ Overview

Qwen2.5-Coder-1.5B-Educational is a LoRA adapter fine-tuned on the Qwen2.5-Coder-1.5B-Instruct base model, specifically optimized for educational code generation in Python. This model excels at producing clear, well-documented, and pedagogically sound code examples.

⚠️ Model Updated: Now using checkpoint-500 (best performing on HumanEval benchmarks)

Key Features

  • 🎯 Optimized for Education: Generates clear, pythonic code with explanations
  • πŸ“ˆ Strong Performance: 64.0% pass@1 on HumanEval benchmark
  • ⚑ Efficient: LoRA fine-tuning enables fast inference and low memory usage
  • πŸ”„ Balanced: Maintains correctness while prioritizing readability

πŸ“Š Performance Metrics

HumanEval Benchmark Results

Metric Score Comparison
Pass@1 64.0% vs 65-70% base model
Problems Passed 105/164 Excellent generalization
Training Loss 0.5695 Optimal convergence
Training Steps 500 Best checkpoint

Why Checkpoint-500 Over Checkpoint-2000?

After rigorous evaluation across multiple checkpoints, checkpoint-500 emerged as the optimal choice:

Checkpoint Steps Final Loss HumanEval Pass@1 Verdict
checkpoint-500 500 0.5695 64.0% βœ… Selected
checkpoint-2000 2000 0.5300 57.3% ❌ Overfitted

Key Insights:

  • βœ… Better Generalization: Higher HumanEval score despite slightly higher loss
  • βœ… Educational Quality: Maintains clear, pedagogical code style
  • βœ… No Overfitting: Avoids memorization patterns seen in later checkpoints
  • βœ… Optimal Balance: Best trade-off between correctness and readability

πŸš€ Quick Start

Installation

pip install transformers peft torch

Basic Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model and adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    device_map="auto",
    torch_dtype="auto"
)

model = PeftModel.from_pretrained(
    base_model,
    "Beebey/qwen-coder-1.5b-educational"
)

tokenizer = AutoTokenizer.from_pretrained(
    "Beebey/qwen-coder-1.5b-educational"
)

# Generate code
prompt = "Instruction: Write a Python function to check if a number is prime\nRΓ©ponse:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    temperature=0.7,
    top_p=0.9,
    do_sample=True
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Advanced Usage with Generation Parameters

# For more deterministic outputs
outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    temperature=0.2,
    top_p=0.95,
    repetition_penalty=1.1,
    do_sample=True
)

# For creative/exploratory code
outputs = model.generate(
    **inputs,
    max_new_tokens=400,
    temperature=0.9,
    top_k=50,
    do_sample=True
)

πŸ—οΈ Model Architecture

Base Model

LoRA Configuration

{
    "r": 8,
    "lora_alpha": 16,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj"],
    "task_type": "CAUSAL_LM"
}

🎯 Training Details

Dataset

  • Source: OpenCoder-LLM/opc-sft-stage2
  • Subset: educational_instruct
  • Focus: Python programming with educational emphasis
  • Examples: High-quality instruction-response pairs

Training Configuration

# Hyperparameters
learning_rate = 2e-4
warmup_steps = 50
max_steps = 500
per_device_train_batch_size = 16
gradient_accumulation_steps = 4
effective_batch_size = 1024

# Optimization
optimizer = "adamw_torch_xla"
lr_scheduler = "cosine"
weight_decay = 0.01

# Model Settings
sequence_length = 256
precision = "bfloat16"

Training Infrastructure

  • Hardware: TPU v6e-16 (Google Cloud)
  • Training Time: ~11 minutes
  • Cost Efficiency: Highly optimized TPU training
  • Framework: Hugging Face Transformers + PEFT

πŸ’ͺ Model Strengths

Code Quality

  • βœ… Pythonic Idioms: Follows PEP 8 and best practices
  • βœ… Clear Variable Names: Self-documenting code
  • βœ… Type Hints: Modern Python typing annotations
  • βœ… Docstrings: Comprehensive function documentation

Educational Value

  • πŸ“š Explanatory Comments: Inline explanations of logic
  • πŸŽ“ Step-by-Step Solutions: Logical problem-solving approach
  • πŸ’‘ Best Practices: Teaches proper coding patterns
  • πŸ” Error Handling: Includes defensive programming

Performance

  • ⚑ Fast Inference: Efficient LoRA architecture
  • 🎯 High Accuracy: 64% HumanEval pass rate
  • πŸ”„ Good Generalization: Works well on unseen problems
  • πŸ“Š Consistent Results: Stable and reproducible outputs

πŸ“ˆ Benchmark Results

HumanEval Evaluation

The model was evaluated on the complete HumanEval benchmark (164 programming problems):

  • Total Problems: 164
  • Problems Passed: 105
  • Pass@1 Score: 64.0%
  • Comparison: 91-96% of base model performance

This demonstrates that the educational fine-tuning maintains strong algorithmic correctness while improving code clarity and documentation.


πŸŽ“ Use Cases

Ideal For

  • πŸ‘¨β€πŸŽ“ Educational Platforms: Code tutoring and learning apps
  • πŸ“– Documentation: Generating code examples with explanations
  • 🏫 Teaching: Creating instructional programming materials
  • πŸ’» Code Review: Suggesting clear, readable implementations

Not Recommended For

  • ❌ Production Critical Systems: Use thoroughly tested code
  • ❌ Security-Sensitive Applications: Requires manual security review
  • ❌ Complex Enterprise Systems: May need additional context
  • ❌ Specialized Domains: Outside Python/general programming

⚠️ Limitations

  • Language Focus: Primarily optimized for Python
  • Context Window: Limited to base model's context length
  • Domain Knowledge: General programming, not domain-specific
  • Code Review: Generated code should always be reviewed
  • Hallucinations: May occasionally generate plausible but incorrect code

πŸ“„ License

This model is released under the Apache 2.0 License.

Copyright 2025 Beebey

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

πŸ“š Citation

If you use this model in your research or applications, please cite:

@misc{qwen-coder-educational-2025,
  author = {Beebey},
  title = {Qwen2.5-Coder-1.5B-Educational: A LoRA Adapter for Educational Code Generation},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/Beebey/qwen-coder-1.5b-educational}},
  note = {Fine-tuned on OpenCoder educational instruction dataset}
}

🀝 Acknowledgments

  • Base Model: Qwen Team for Qwen2.5-Coder-1.5B-Instruct
  • Dataset: OpenCoder-LLM for the educational instruction dataset
  • Framework: Hugging Face Transformers and PEFT
  • Infrastructure: Google Cloud TPU v6e for efficient training

πŸ“ž Contact & Support


Made with ❀️ for the educational coding community

Downloads last month
34
Inference Providers NEW
This model isn't deployed by any Inference Provider. πŸ™‹ Ask for provider support

Model tree for Beebey/qwen-coder-1.5b-educational

Base model

Qwen/Qwen2.5-1.5B
Adapter
(16)
this model

Dataset used to train Beebey/qwen-coder-1.5b-educational