
dissimilar_FullFT

LLaMA model fine-tuned on the QA_CODE_SUMMARIZATION dataset.

  • Method: Full fine-tuning (no LoRA; rank N/A)
  • Tasks: QA_CODE_SUMMARIZATION
  • Base Model: LLaMA 1B
  • Optimizer: AdamW
  • Batch Size: 4

Trained using the 🤗 Transformers Trainer API.
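As a rough illustration, the hyperparameters listed above map onto a standard Trainer setup roughly as sketched below. The base checkpoint id and the `train_dataset` variable are placeholders, not confirmed by this card; this is a configuration sketch, not the exact training script.

```python
# Sketch of the full fine-tuning setup implied by the card's hyperparameters.
# The checkpoint id below is an assumption, not confirmed by the card.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-3.2-1B"  # assumed 1B LLaMA base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="bfloat16")

args = TrainingArguments(
    output_dir="dissimilar_FullFT",
    per_device_train_batch_size=4,  # batch size from the card
    optim="adamw_torch",            # AdamW, as listed above
    bf16=True,                      # matches the BF16 tensor type
)

# train_dataset: a tokenized QA_CODE_SUMMARIZATION split (not shown here)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```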

Weights: Safetensors · 1B params · BF16