askatasuna committed
Commit fe50bb9 · 1 Parent(s): 4ea668a

update model card README.md

Files changed (1)
  1. README.md +20 -15
README.md CHANGED
@@ -16,9 +16,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.8676
-- Bleu: 0.0537
-- Gen Len: 19.0
+- Loss: 1.3850
+- Bleu: 4.7891
+- Gen Len: 17.9507
 
 ## Model description
 
@@ -37,29 +37,34 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.02
+- learning_rate: 0.0002
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 10
+- num_epochs: 15
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
-| No log | 1.0 | 55 | 2.2419 | 0.0523 | 19.0 |
-| No log | 2.0 | 110 | 2.0286 | 0.1719 | 19.0 |
-| No log | 3.0 | 165 | 2.1105 | 0.1719 | 19.0 |
-| No log | 4.0 | 220 | 1.9847 | 0.0393 | 19.0 |
-| No log | 5.0 | 275 | 1.9553 | 0.0523 | 19.0 |
-| No log | 6.0 | 330 | 2.0231 | 0.1719 | 19.0 |
-| No log | 7.0 | 385 | 1.9451 | 0.1719 | 19.0 |
-| No log | 8.0 | 440 | 1.9201 | 0.0537 | 19.0 |
-| No log | 9.0 | 495 | 1.8968 | 0.0537 | 19.0 |
-| 2.0859 | 10.0 | 550 | 1.8676 | 0.0537 | 19.0 |
+| No log | 1.0 | 55 | 2.2953 | 0.285 | 19.0 |
+| No log | 2.0 | 110 | 1.9083 | 0.3426 | 19.0 |
+| No log | 3.0 | 165 | 1.7123 | 0.6444 | 18.6404 |
+| No log | 4.0 | 220 | 1.6110 | 1.1193 | 17.7291 |
+| No log | 5.0 | 275 | 1.5440 | 0.9035 | 17.8621 |
+| No log | 6.0 | 330 | 1.4924 | 0.8067 | 17.8424 |
+| No log | 7.0 | 385 | 1.4654 | 0.8635 | 17.8079 |
+| No log | 8.0 | 440 | 1.4445 | 2.3215 | 17.6059 |
+| No log | 9.0 | 495 | 1.4319 | 2.5679 | 17.4384 |
+| 1.8308 | 10.0 | 550 | 1.4178 | 2.3622 | 17.7783 |
+| 1.8308 | 11.0 | 605 | 1.4011 | 3.6065 | 17.6995 |
+| 1.8308 | 12.0 | 660 | 1.3969 | 3.8257 | 17.8768 |
+| 1.8308 | 13.0 | 715 | 1.3930 | 4.7373 | 17.8325 |
+| 1.8308 | 14.0 | 770 | 1.3864 | 4.7501 | 17.9113 |
+| 1.8308 | 15.0 | 825 | 1.3850 | 4.7891 | 17.9507 |
 
 
 ### Framework versions
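
For reference, the hyperparameters listed in the updated card correspond to the stock `transformers` sequence-to-sequence fine-tuning setup. The sketch below shows how they might be expressed as a `Seq2SeqTrainingArguments` configuration, together with a typical `compute_metrics` for the Bleu and Gen Len columns. It is a minimal sketch, not the author's training script: the output directory, the per-epoch evaluation strategy, `predict_with_generate`, and the metric code are assumptions, and the dataset is omitted because the card does not name it.

```python
import numpy as np
import evaluate
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Values taken from the "Training hyperparameters" list after this commit;
# everything else here is an illustrative assumption.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned",   # placeholder path
    learning_rate=2e-4,                # 0.0002 (lowered from 0.02 in this commit)
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",        # Adam with betas=(0.9,0.999), epsilon=1e-08 is the default optimizer
    num_train_epochs=15,               # raised from 10 in this commit
    fp16=True,                         # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",       # assumption: the results table shows one eval per epoch
    predict_with_generate=True,        # assumption: required for Bleu / Gen Len at eval time
)

# Bleu and Gen Len as reported in the card are typically computed from the
# generated token ids with sacrebleu plus the mean non-padding length.
sacrebleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Replace the -100 padding used for labels before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    bleu = sacrebleu.compute(
        predictions=[p.strip() for p in decoded_preds],
        references=[[l.strip()] for l in decoded_labels],
    )
    gen_len = np.mean([np.count_nonzero(p != tokenizer.pad_token_id) for p in preds])
    return {"bleu": bleu["score"], "gen_len": gen_len}
```

These arguments would then be passed to a `Seq2SeqTrainer` together with the tokenized dataset, the `compute_metrics` function above, and a `DataCollatorForSeq2Seq`.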