# lemexp-task1-v3-template_full-Llama-3.2-1B-8lr-12epochs-no-eos
This model is a fine-tuned version of meta-llama/Llama-3.2-1B on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1383
## Model description
More information needed
## Intended uses & limitations
More information needed
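Since the Framework versions below list PEFT, this repository most likely holds a LoRA-style adapter rather than full model weights. The following is a minimal loading sketch under that assumption; the repo id is taken from the model name above, and the prompt is a placeholder.

```python
# Hedged sketch: assumes this repo hosts a PEFT adapter for the base model below.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B"
adapter_id = "yalhessi/lemexp-task1-v3-template_full-Llama-3.2-1B-8lr-12epochs-no-eos"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter

inputs = tokenizer("Example prompt:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If the repository instead contains merged full weights, `AutoModelForCausalLM.from_pretrained(adapter_id)` alone would suffice.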
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
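For reference, the list above maps onto Hugging Face `TrainingArguments` roughly as follows. This is a reconstruction, not the original training script: `output_dir` is an assumption, and the model, dataset, and `Trainer` wiring are omitted because they are not documented here. With 8 devices at a per-device batch size of 2, the total batch size of 16 follows without gradient accumulation.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="lemexp-task1-v3-template_full-Llama-3.2-1B-8lr-12epochs-no-eos",  # assumption
    learning_rate=8e-4,
    per_device_train_batch_size=2,  # 8 GPUs -> total train batch size 16
    per_device_eval_batch_size=2,   # 8 GPUs -> total eval batch size 16
    seed=42,
    optim="adamw_torch",            # AdamW, betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=12,
    fp16=True,                      # "Native AMP" mixed precision
)
```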
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.3169 | 0.2000 | 3114 | 0.3013 |
| 0.2958 | 0.4000 | 6228 | 0.2757 |
| 0.2843 | 0.6000 | 9342 | 0.2796 |
| 0.2781 | 0.8001 | 12456 | 0.2641 |
| 0.2731 | 1.0001 | 15570 | 0.2644 |
| 0.2682 | 1.2001 | 18684 | 0.2544 |
| 0.2621 | 1.4001 | 21798 | 0.2642 |
| 0.2592 | 1.6001 | 24912 | 0.2508 |
| 0.2547 | 1.8001 | 28026 | 0.2454 |
| 0.2506 | 2.0001 | 31140 | 0.2388 |
| 0.2469 | 2.2001 | 34254 | 0.2413 |
| 0.2449 | 2.4002 | 37368 | 0.2360 |
| 0.2461 | 2.6002 | 40482 | 0.2311 |
| 0.2409 | 2.8002 | 43596 | 0.2259 |
| 0.2361 | 3.0002 | 46710 | 0.2251 |
| 0.2335 | 3.2002 | 49824 | 0.2280 |
| 0.2302 | 3.4002 | 52938 | 0.2239 |
| 0.2292 | 3.6002 | 56052 | 0.2168 |
| 0.2257 | 3.8002 | 59166 | 0.2140 |
| 0.2244 | 4.0003 | 62280 | 0.2129 |
| 0.2206 | 4.2003 | 65394 | 0.2101 |
| 0.2183 | 4.4003 | 68508 | 0.2088 |
| 0.2164 | 4.6003 | 71622 | 0.2109 |
| 0.2131 | 4.8003 | 74736 | 0.2116 |
| 0.2102 | 5.0003 | 77850 | 0.1986 |
| 0.2082 | 5.2003 | 80964 | 0.2019 |
| 0.2070 | 5.4003 | 84078 | 0.2030 |
| 0.2058 | 5.6004 | 87192 | 0.2017 |
| 0.2011 | 5.8004 | 90306 | 0.1928 |
| 0.2019 | 6.0004 | 93420 | 0.1910 |
| 0.1953 | 6.2004 | 96534 | 0.1874 |
| 0.1960 | 6.4004 | 99648 | 0.1884 |
| 0.1966 | 6.6004 | 102762 | 0.1845 |
| 0.1901 | 6.8004 | 105876 | 0.1873 |
| 0.1915 | 7.0004 | 108990 | 0.1816 |
| 0.1849 | 7.2005 | 112104 | 0.1784 |
| 0.1819 | 7.4005 | 115218 | 0.1763 |
| 0.1837 | 7.6005 | 118332 | 0.1745 |
| 0.1775 | 7.8005 | 121446 | 0.1729 |
| 0.1753 | 8.0005 | 124560 | 0.1725 |
| 0.1725 | 8.2005 | 127674 | 0.1686 |
| 0.1716 | 8.4005 | 130788 | 0.1674 |
| 0.1702 | 8.6006 | 133902 | 0.1656 |
| 0.1712 | 8.8006 | 137016 | 0.1655 |
| 0.1633 | 9.0006 | 140130 | 0.1661 |
| 0.1591 | 9.2006 | 143244 | 0.1593 |
| 0.1582 | 9.4006 | 146358 | 0.1600 |
| 0.1568 | 9.6006 | 149472 | 0.1579 |
| 0.1552 | 9.8006 | 152586 | 0.1573 |
| 0.1512 | 10.0006 | 155700 | 0.1522 |
| 0.1484 | 10.2007 | 158814 | 0.1509 |
| 0.1497 | 10.4007 | 161928 | 0.1510 |
| 0.1450 | 10.6007 | 165042 | 0.1478 |
| 0.1423 | 10.8007 | 168156 | 0.1451 |
| 0.1402 | 11.0007 | 171270 | 0.1439 |
| 0.1369 | 11.2007 | 174384 | 0.1441 |
| 0.1357 | 11.4007 | 177498 | 0.1410 |
| 0.1309 | 11.6007 | 180612 | 0.1390 |
| 0.1334 | 11.8008 | 183726 | 0.1383 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 4.2.0
- Tokenizers 0.21.0
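To reproduce this environment, pinning the versions above should suffice; a short sketch to verify the installed pins (the pip commands in the comments are one plausible install path, not taken from this card):

```python
# Suggested installs matching the versions listed above:
#   pip install peft==0.14.0 transformers==4.47.0 datasets==4.2.0 tokenizers==0.21.0
#   pip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu124
import peft, transformers, torch, datasets, tokenizers

# Print each library's version to confirm the environment matches the card.
for mod in (peft, transformers, torch, datasets, tokenizers):
    print(mod.__name__, mod.__version__)
```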