train_svamp_456_1757596109

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the svamp dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0901
  • Num Input Tokens Seen: 704688
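
Assuming this loss is the standard token-level cross-entropy computed by the Hugging Face Trainer, it corresponds to an evaluation perplexity of roughly exp(0.0901) ≈ 1.09.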

Model description

This is a parameter-efficient (PEFT) adapter for meta-llama/Meta-Llama-3-8B-Instruct, fine-tuned on the svamp dataset; only the adapter weights are stored in this repository, and the frozen base model must be loaded separately.
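
As a minimal loading sketch, assuming the adapter is published on the Hub under the repo id rbelanec/train_svamp_456_1757596109 (taken from this card), it can be attached to the base model with PEFT:

```python
# Minimal loading sketch; dtype choice is illustrative, not prescribed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Attach the fine-tuned adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "rbelanec/train_svamp_456_1757596109")
model.eval()
```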

Intended uses & limitations

No explicit guidance is provided. Given the training data, the adapter is presumably intended for solving short arithmetic word problems in the SVAMP style; its limitations are not documented.

Training and evaluation data

The model was fine-tuned and evaluated on svamp (SVAMP, Simple Variations on Arithmetic Math word Problems), a benchmark of elementary-level arithmetic word problems; the exact splits and preprocessing are not documented.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 456
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
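
A minimal sketch of how these values might map onto transformers TrainingArguments; the training script itself is not published, so the output directory and any arguments not listed above are assumptions:

```python
# Hypothetical mapping of the listed hyperparameters onto TrainingArguments;
# output_dir is an assumption, everything else comes from the list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_svamp_456_1757596109",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=456,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,  # 10% of the 1580 total steps, i.e. ~158 warmup steps
    num_train_epochs=10.0,
)
```

With a cosine schedule and warmup_ratio=0.1, the learning rate ramps up linearly over roughly the first 158 of the 1580 training steps and then decays toward zero along a cosine curve.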

Training results

| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| 2.1169        | 0.5   | 79   | 1.8305          | 35424             |
| 0.2052        | 1.0   | 158  | 0.1453          | 70560             |
| 0.2513        | 1.5   | 237  | 0.1199          | 105728            |
| 0.0921        | 2.0   | 316  | 0.1245          | 140912            |
| 0.0865        | 2.5   | 395  | 0.0999          | 176272            |
| 0.0475        | 3.0   | 474  | 0.1334          | 211360            |
| 0.0515        | 3.5   | 553  | 0.1016          | 246912            |
| 0.0637        | 4.0   | 632  | 0.0945          | 281968            |
| 0.1302        | 4.5   | 711  | 0.1023          | 317392            |
| 0.1022        | 5.0   | 790  | 0.0963          | 352128            |
| 0.0214        | 5.5   | 869  | 0.0901          | 387744            |
| 0.033         | 6.0   | 948  | 0.0930          | 422800            |
| 0.063         | 6.5   | 1027 | 0.1005          | 457968            |
| 0.0314        | 7.0   | 1106 | 0.0920          | 493104            |
| 0.0399        | 7.5   | 1185 | 0.0946          | 528304            |
| 0.0181        | 8.0   | 1264 | 0.0965          | 563936            |
| 0.0449        | 8.5   | 1343 | 0.0946          | 598976            |
| 0.0193        | 9.0   | 1422 | 0.0967          | 634144            |
| 0.0455        | 9.5   | 1501 | 0.0954          | 669632            |
| 0.0163        | 10.0  | 1580 | 0.0962          | 704688            |

The reported evaluation loss of 0.0901 is the minimum validation loss in this table, reached at epoch 5.5 (step 869).

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
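
For reproducibility, the versions above could be pinned, e.g. in a requirements.txt (the +cu128 suffix denotes the CUDA 12.8 build of PyTorch, so the exact torch wheel has to be chosen per platform):

```
peft==0.15.2
transformers==4.51.3
torch==2.8.0
datasets==3.6.0
tokenizers==0.21.1
```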