MedGemma-4B Anatomy v2.0 (Production)
High-quality production model fine-tuned on 830 curated anatomy Q&A pairs.
Model Details
- Base Model: google/medgemma-4b-it (4B parameters)
- Training Data: 830 anatomy questions with structured answers
- Method: LoRA (r=32, α=64)
- Epochs: 6
- Training Time: 0.69 hours
- Hardware: A100 40GB GPU
- Final Loss: 0.8466
- Validation Loss: 1.1448
 
Training Configuration
- Max Sequence Length: 1024
- Batch Size: 2 per device (effective 16 via gradient accumulation)
- Learning Rate: 0.00015
- LoRA Rank: 32
- LoRA Alpha: 64
- Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
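
For reference, the table above maps onto peft's `LoraConfig` and transformers' `TrainingArguments` roughly as sketched below. The `lora_dropout` value, the output path, and `gradient_accumulation_steps=8` (implied by a per-device batch of 2 reaching an effective 16 on the single listed GPU) are assumptions, not values stated in this card:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings exactly as listed above.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,  # assumption: dropout is not stated in this card
    bias="none",
    task_type="CAUSAL_LM",
)

# Trainer settings implied by the table; weight decay was increased
# over v1.2 (see Improvements below) but its exact value is not stated.
training_args = TrainingArguments(
    output_dir="medgemma-anatomy-v2",  # hypothetical output path
    num_train_epochs=6,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,  # 2 x 8 = effective batch of 16
    learning_rate=1.5e-4,
    bf16=True,
)
```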
Answer Structure
All answers follow a standardized 5-section format (a parsing sketch follows the list):
- Overview & Pathophysiology - Mechanism and underlying processes
- Clinical Presentation - Signs, symptoms, examination findings
- Diagnostic Approach - Investigations and reasoning
- Management Principles - Treatment approaches
- Clinical Vignette - Realistic clinical scenario
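
Because the model was trained to emit these five headings, downstream code can split an answer back into its sections. A minimal sketch, assuming the titles appear verbatim in the generated text:

```python
import re

SECTIONS = [
    "Overview & Pathophysiology",
    "Clinical Presentation",
    "Diagnostic Approach",
    "Management Principles",
    "Clinical Vignette",
]

def split_answer(text: str) -> dict:
    """Split a generated answer into its five sections.

    Assumes each section title appears verbatim as a heading in the
    output, which is the format the model was trained to produce.
    """
    pattern = "|".join(re.escape(s) for s in SECTIONS)
    parts = re.split(f"({pattern})", text)  # keep titles as delimiters
    result, current = {}, None
    for chunk in parts:
        if chunk in SECTIONS:
            current = chunk
            result[current] = ""
        elif current:
            result[current] += chunk
    return {k: v.strip(" :\n-") for k, v in result.items()}
```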
 
Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "krishna195/medgemma-anatomy-v2.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a Gemma-style chat prompt around the question.
question = "What is the anatomical snuffbox and its clinical significance?"
prompt = f"<start_of_turn>user\n{question}<end_of_turn>\n<start_of_turn>model\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=True is needed for temperature to have any effect.
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
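
Alternatively, the prompt can be built with the tokenizer's chat template, assuming the checkpoint ships one (Gemma instruction-tuned tokenizers typically do); a minimal sketch:

```python
# Equivalent prompt construction via the chat template; assumes the
# tokenizer ships a Gemma-style template (not verified in this card).
messages = [{"role": "user", "content": question}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```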
Performance
- Inference Speed: ~40-50 tokens/sec (A100), ~30-35 tokens/sec (T4)
- Memory: 8-9GB (bfloat16), 3-4GB (4-bit)
- Quality: comprehensive, structured answers with clinical reasoning
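
The 4-bit memory figure assumes quantized loading. A minimal sketch with bitsandbytes; the quantization settings below (nf4, bfloat16 compute) are common defaults, not values stated in this card:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit loading sketch; the quant settings here are assumptions.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "krishna195/medgemma-anatomy-v2.0",
    quantization_config=bnb_config,
    device_map="auto",
)
```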
 
Improvements over v1.2
- 3.5x more training data (183 → 830 questions)
- Higher LoRA rank (8 → 32) for better adaptation
- More epochs (4 → 6) for deeper learning
- Better regularization with increased weight decay
- Comprehensive target modules for full model adaptation
 
License
Apache 2.0
Citation
```bibtex
@misc{medgemma-anatomy-v2,
  title={MedGemma-4B Anatomy v2.0},
  author={Krishna195},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/krishna195/medgemma-anatomy-v2.0}
}
```