train_cb_789_1757596127

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the cb dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1000
  • Num Input Tokens Seen: 352296
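
The card identifies this as a PEFT adapter on top of meta-llama/Meta-Llama-3-8B-Instruct. Below is a minimal loading sketch, assuming the adapter is hosted under the repo id rbelanec/train_cb_789_1757596127 and that you have access to the gated base model; the prompt is a placeholder, not a prescribed input format.

```python
# Sketch: load the base model and apply this PEFT adapter.
# Assumes access to the gated base model and that `accelerate`
# is installed (needed for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_cb_789_1757596127"  # adapter repo id from this card

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

prompt = "..."  # placeholder; supply a CB-style prompt here
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16)
# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```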

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 789
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
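
For reference, here is a minimal sketch of how the settings above map onto transformers TrainingArguments. The output_dir is an assumption, and anything not listed in the card is left at its default:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_cb_789_1757596127",  # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=789,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```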

Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|--------------:|-------:|-----:|----------------:|------------------:|
| 0.7976        | 0.5088 | 29   | 0.6185          | 18528             |
| 0.1769        | 1.0175 | 58   | 0.1000          | 35960             |
| 0.1434        | 1.5263 | 87   | 0.1033          | 53272             |
| 0.0966        | 2.0351 | 116  | 0.1114          | 71200             |
| 0.1937        | 2.5439 | 145  | 0.2848          | 89088             |
| 0.1799        | 3.0526 | 174  | 0.2855          | 107504            |
| 0.1376        | 3.5614 | 203  | 0.2217          | 126384            |
| 0.1736        | 4.0702 | 232  | 0.2966          | 143952            |
| 0.0914        | 4.5789 | 261  | 0.3254          | 161840            |
| 0.0036        | 5.0877 | 290  | 0.4225          | 179816            |
| 0.1632        | 5.5965 | 319  | 0.2599          | 197416            |
| 0.1906        | 6.1053 | 348  | 0.4071          | 214432            |
| 0.3629        | 6.6140 | 377  | 0.4295          | 233280            |
| 0.0784        | 7.1228 | 406  | 0.3864          | 251120            |
| 0.1310        | 7.6316 | 435  | 0.4001          | 270128            |
| 0.0791        | 8.1404 | 464  | 0.4088          | 288216            |
| 0.2521        | 8.6491 | 493  | 0.4207          | 306648            |
| 0.0186        | 9.1579 | 522  | 0.4245          | 323296            |
| 0.0800        | 9.6667 | 551  | 0.4157          | 340960            |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
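
A quick way to confirm a local environment matches these pins (a sketch; assumes the packages are importable):

```python
# Compare installed versions against the pins listed above.
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": "0.15.2",
    "transformers": "4.51.3",
    "torch": "2.8.0",       # card pins 2.8.0+cu128, i.e. a CUDA 12.8 build
    "datasets": "3.6.0",
    "tokenizers": "0.21.1",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name in expected:
    print(f"{name}: {installed[name]} (card pins {expected[name]})")
```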