Labira/LabiraPJOK_123_100_Full

This model is a fine-tuned version of indolem/indobert-base-uncased on an unknown dataset. It achieves the following results at the end of training:

  • Train Loss: 0.0108
  • Validation Loss: 0.0014
  • Epoch: 99
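
Since the card does not document the task head or intended usage, the snippet below is only a minimal sketch for loading the checkpoint with the TensorFlow classes in Transformers. The repository ID is assumed from the title above, and TFAutoModel loads just the base encoder for feature extraction:

```python
from transformers import AutoTokenizer, TFAutoModel

# Repository ID assumed from the card title; the task head is not documented,
# so only the base encoder is loaded here (any head weights are skipped).
model_id = "Labira/LabiraPJOK_123_100_Full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModel.from_pretrained(model_id)

# Encode an example Indonesian sentence and take the [CLS] hidden state.
inputs = tokenizer("Apa itu PJOK?", return_tensors="tf")
outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)  # (1, 768) for a BERT-base encoder
```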

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
  • training_precision: float32
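
For reference, the serialized optimizer config above corresponds to the following Keras construction (a sketch only; the model, loss, and training loop are not shown):

```python
import tensorflow as tf

# Linear (power=1.0) decay from 2e-05 to 0.0 over 2200 steps, as in the config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=2200,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the listed betas/epsilon; no weight decay, EMA, or gradient clipping is applied.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```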

Training results

Train Loss Validation Loss Epoch
4.8014 3.8239 0
3.5330 3.0989 1
3.0273 2.6526 2
2.6530 2.0593 3
2.2572 1.6401 4
1.7060 1.0829 5
1.2904 0.6494 6
0.9646 0.4921 7
0.6371 0.2708 8
0.4612 0.2947 9
0.4154 0.2030 10
0.4027 0.1670 11
0.2759 0.1051 12
0.2515 0.1313 13
0.1759 0.0651 14
0.1293 0.0732 15
0.1595 0.0472 16
0.0989 0.0647 17
0.0797 0.0566 18
0.1292 0.0351 19
0.1098 0.0743 20
0.1490 0.0591 21
0.0934 0.0558 22
0.0720 0.0330 23
0.0502 0.0265 24
0.0598 0.0235 25
0.0589 0.0272 26
0.0409 0.0243 27
0.0445 0.0199 28
0.0425 0.0395 29
0.0420 0.0252 30
0.0332 0.0194 31
0.0286 0.0178 32
0.0480 0.0184 33
0.0361 0.0279 34
0.0529 0.0195 35
0.0296 0.0194 36
0.0346 0.0143 37
0.0256 0.0177 38
0.0331 0.0098 39
0.0386 0.0086 40
0.0303 0.0053 41
0.0310 0.0154 42
0.0193 0.0024 43
0.1070 0.0090 44
0.0937 0.0123 45
0.0766 0.0112 46
0.0698 0.0057 47
0.0297 0.0043 48
0.0385 0.0117 49
0.0802 0.0181 50
0.1040 0.0072 51
0.0836 0.0163 52
0.0861 0.0060 53
0.0867 0.0079 54
0.1242 0.0041 55
0.1090 0.0070 56
0.0394 0.0042 57
0.0312 0.0041 58
0.0391 0.0020 59
0.0320 0.0023 60
0.0479 0.0135 61
0.0403 0.0017 62
0.0352 0.0019 63
0.0314 0.0030 64
0.0254 0.0020 65
0.0243 0.0013 66
0.0504 0.0022 67
0.0474 0.0023 68
0.0430 0.0036 69
0.0142 0.0021 70
0.0169 0.0014 71
0.0110 0.0013 72
0.0229 0.0011 73
0.0476 0.0008 74
0.0461 0.0012 75
0.0170 0.0013 76
0.0210 0.0020 77
0.0146 0.0021 78
0.0206 0.0019 79
0.0137 0.0021 80
0.0125 0.0015 81
0.0303 0.0026 82
0.0100 0.0019 83
0.0088 0.0015 84
0.0128 0.0016 85
0.0153 0.0018 86
0.0141 0.0018 87
0.0163 0.0017 88
0.0104 0.0014 89
0.0098 0.0014 90
0.0116 0.0013 91
0.0160 0.0015 92
0.0161 0.0016 93
0.0088 0.0015 94
0.0101 0.0015 95
0.0105 0.0015 96
0.0110 0.0015 97
0.0049 0.0014 98
0.0108 0.0014 99

Framework versions

  • Transformers 4.45.2
  • TensorFlow 2.17.0
  • Datasets 2.20.0
  • Tokenizers 0.20.1