# lemexp-task1-v3-template_full_nodefs-Llama-3.2-1B-8lr-12epochs-no-eos
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1352
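The framework versions listed below include PEFT, which suggests this repo hosts a PEFT (LoRA-style) adapter on top of the base model rather than full fine-tuned weights. A minimal loading sketch under that assumption (the adapter repo id is taken from this card's title):

```python
# Minimal loading sketch: assumes this repo hosts a PEFT adapter for
# meta-llama/Llama-3.2-1B, as suggested by the PEFT version listed under
# "Framework versions" below.
# pip install transformers peft torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
model = PeftModel.from_pretrained(
    base,
    "yalhessi/lemexp-task1-v3-template_full_nodefs-Llama-3.2-1B-8lr-12epochs-no-eos",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```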
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a rough equivalent setup is sketched after the list):
- learning_rate: 0.0008
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
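The training script itself is not part of this card. As a hypothetical illustration only, a `transformers` `TrainingArguments` configuration matching the listed values might look like this (model, dataset, and any LoRA config are omitted):

```python
# Hypothetical sketch mirroring the hyperparameters above; not the
# actual training script, which this card does not include.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lemexp-task1-v3-template_full_nodefs-Llama-3.2-1B-8lr-12epochs-no-eos",
    learning_rate=8e-4,              # learning_rate: 0.0008
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    seed=42,                         # seed: 42
    num_train_epochs=12,             # num_epochs: 12
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    optim="adamw_torch",             # AdamW, betas/epsilon at defaults
    fp16=True,                       # Native AMP (could also be bf16)
)
# Launched across 8 GPUs (e.g. via torchrun or accelerate), the
# effective batch size is 8 devices x 2 per device = 16, matching
# total_train_batch_size / total_eval_batch_size above.
```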
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.306 | 0.2000 | 3114 | 0.3017 |
| 0.2867 | 0.4000 | 6228 | 0.2795 |
| 0.2775 | 0.6000 | 9342 | 0.2688 |
| 0.2701 | 0.8001 | 12456 | 0.2583 |
| 0.268 | 1.0001 | 15570 | 0.2530 |
| 0.2609 | 1.2001 | 18684 | 0.2470 |
| 0.2549 | 1.4001 | 21798 | 0.2425 |
| 0.2542 | 1.6001 | 24912 | 0.2384 |
| 0.2489 | 1.8001 | 28026 | 0.2377 |
| 0.2469 | 2.0001 | 31140 | 0.2334 |
| 0.2416 | 2.2001 | 34254 | 0.2419 |
| 0.2402 | 2.4002 | 37368 | 0.2269 |
| 0.2401 | 2.6002 | 40482 | 0.2255 |
| 0.2368 | 2.8002 | 43596 | 0.2398 |
| 0.2309 | 3.0002 | 46710 | 0.2226 |
| 0.2289 | 3.2002 | 49824 | 0.2207 |
| 0.226 | 3.4002 | 52938 | 0.2194 |
| 0.2249 | 3.6002 | 56052 | 0.2178 |
| 0.2214 | 3.8002 | 59166 | 0.2173 |
| 0.2207 | 4.0003 | 62280 | 0.2128 |
| 0.2158 | 4.2003 | 65394 | 0.2104 |
| 0.2147 | 4.4003 | 68508 | 0.2071 |
| 0.2139 | 4.6003 | 71622 | 0.2083 |
| 0.2094 | 4.8003 | 74736 | 0.2077 |
| 0.2072 | 5.0003 | 77850 | 0.1972 |
| 0.2039 | 5.2003 | 80964 | 0.1964 |
| 0.2036 | 5.4003 | 84078 | 0.1948 |
| 0.2031 | 5.6004 | 87192 | 0.1950 |
| 0.1964 | 5.8004 | 90306 | 0.1934 |
| 0.1982 | 6.0004 | 93420 | 0.1839 |
| 0.1929 | 6.2004 | 96534 | 0.1882 |
| 0.1917 | 6.4004 | 99648 | 0.1845 |
| 0.1917 | 6.6004 | 102762 | 0.1811 |
| 0.1866 | 6.8004 | 105876 | 0.1800 |
| 0.1885 | 7.0004 | 108990 | 0.1778 |
| 0.182 | 7.2005 | 112104 | 0.1756 |
| 0.1798 | 7.4005 | 115218 | 0.1734 |
| 0.1806 | 7.6005 | 118332 | 0.1758 |
| 0.175 | 7.8005 | 121446 | 0.1728 |
| 0.1729 | 8.0005 | 124560 | 0.1737 |
| 0.1695 | 8.2005 | 127674 | 0.1673 |
| 0.1674 | 8.4005 | 130788 | 0.1657 |
| 0.1676 | 8.6006 | 133902 | 0.1623 |
| 0.1679 | 8.8006 | 137016 | 0.1609 |
| 0.1617 | 9.0006 | 140130 | 0.1596 |
| 0.1572 | 9.2006 | 143244 | 0.1590 |
| 0.1565 | 9.4006 | 146358 | 0.1570 |
| 0.1544 | 9.6006 | 149472 | 0.1537 |
| 0.1525 | 9.8006 | 152586 | 0.1512 |
| 0.1493 | 10.0006 | 155700 | 0.1524 |
| 0.1471 | 10.2007 | 158814 | 0.1479 |
| 0.1466 | 10.4007 | 161928 | 0.1452 |
| 0.1431 | 10.6007 | 165042 | 0.1442 |
| 0.1403 | 10.8007 | 168156 | 0.1418 |
| 0.1381 | 11.0007 | 171270 | 0.1400 |
| 0.1354 | 11.2007 | 174384 | 0.1383 |
| 0.1341 | 11.4007 | 177498 | 0.1374 |
| 0.1294 | 11.6007 | 180612 | 0.1364 |
| 0.1314 | 11.8008 | 183726 | 0.1352 |
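Validation loss decreases steadily over the 12 epochs, from 0.3017 to 0.1352. If the reported loss is mean token-level cross-entropy in nats (an assumption; the card does not state the loss definition), the final value corresponds to a perplexity of roughly exp(0.1352):

```python
import math

# Back-of-envelope conversion from cross-entropy loss to perplexity.
# Assumes the reported eval loss is mean per-token cross-entropy in nats.
final_eval_loss = 0.1352
perplexity = math.exp(final_eval_loss)
print(f"perplexity ~ {perplexity:.4f}")  # ~ 1.1448
```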
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 4.2.0
- Tokenizers 0.21.0