# PhoBert_Lexical_Dataset45K
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.4753
- Accuracy: 0.8913
- F1: 0.8822
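
A minimal inference sketch (not part of the original card): loading the checkpoint for sequence classification with `transformers`. The label set is not documented here, so the printed `id2label` lookup and the example sentence are assumptions; note that PhoBERT expects word-segmented Vietnamese input (e.g. via VnCoreNLP or underthesea).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "RonTon05/PhoBert_Lexical_Dataset45K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# PhoBERT expects word-segmented input; this sentence is assumed
# to be pre-segmented (underscores join multi-syllable words).
text = "Tôi rất thích sản_phẩm này"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
# Label names are an assumption: the card does not document the label set.
print(pred, model.config.id2label.get(pred, pred))
```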
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
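
These hyperparameters map onto the standard `Trainer` API roughly as in the sketch below. This is a hedged reconstruction, not the author's script: the `output_dir` is hypothetical, and `eval_steps=200` is inferred from the evaluation cadence in the results table.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phobert-lexical-45k",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",               # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=15,
    eval_strategy="steps",             # results table reports eval every 200 steps
    eval_steps=200,
)
```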
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|---|---|---|---|---|---|
| No log | 0.2841 | 200 | 0.3590 | 0.8315 | 0.8226 |
| No log | 0.5682 | 400 | 0.2883 | 0.8752 | 0.8666 |
| No log | 0.8523 | 600 | 0.2859 | 0.8760 | 0.8676 |
| 0.3584 | 1.1364 | 800 | 0.2665 | 0.8822 | 0.8733 |
| 0.3584 | 1.4205 | 1000 | 0.2749 | 0.8817 | 0.8661 |
| 0.3584 | 1.7045 | 1200 | 0.2749 | 0.8788 | 0.8721 |
| 0.3584 | 1.9886 | 1400 | 0.2721 | 0.8915 | 0.8796 |
| 0.2564 | 2.2727 | 1600 | 0.2601 | 0.8897 | 0.8823 |
| 0.2564 | 2.5568 | 1800 | 0.2470 | 0.8930 | 0.8829 |
| 0.2564 | 2.8409 | 2000 | 0.2494 | 0.8958 | 0.8862 |
| 0.2235 | 3.125 | 2200 | 0.2616 | 0.8944 | 0.8848 |
| 0.2235 | 3.4091 | 2400 | 0.2578 | 0.8887 | 0.8816 |
| 0.2235 | 3.6932 | 2600 | 0.2665 | 0.8905 | 0.8816 |
| 0.2235 | 3.9773 | 2800 | 0.2510 | 0.8934 | 0.8850 |
| 0.1979 | 4.2614 | 3000 | 0.2717 | 0.8906 | 0.8827 |
| 0.1979 | 4.5455 | 3200 | 0.2817 | 0.8849 | 0.8784 |
| 0.1979 | 4.8295 | 3400 | 0.2865 | 0.8835 | 0.8770 |
| 0.1693 | 5.1136 | 3600 | 0.2833 | 0.8921 | 0.8840 |
| 0.1693 | 5.3977 | 3800 | 0.2817 | 0.8950 | 0.8856 |
| 0.1693 | 5.6818 | 4000 | 0.3206 | 0.8837 | 0.8772 |
| 0.1693 | 5.9659 | 4200 | 0.3095 | 0.8921 | 0.8805 |
| 0.1468 | 6.25 | 4400 | 0.3032 | 0.8925 | 0.8819 |
| 0.1468 | 6.5341 | 4600 | 0.3090 | 0.8896 | 0.8805 |
| 0.1468 | 6.8182 | 4800 | 0.3135 | 0.8948 | 0.8854 |
| 0.128 | 7.1023 | 5000 | 0.3523 | 0.8838 | 0.8770 |
| 0.128 | 7.3864 | 5200 | 0.3495 | 0.8956 | 0.8850 |
| 0.128 | 7.6705 | 5400 | 0.3480 | 0.8956 | 0.8859 |
| 0.128 | 7.9545 | 5600 | 0.3398 | 0.8909 | 0.8829 |
| 0.1086 | 8.2386 | 5800 | 0.3459 | 0.8930 | 0.8844 |
| 0.1086 | 8.5227 | 6000 | 0.3390 | 0.8954 | 0.8864 |
| 0.1086 | 8.8068 | 6200 | 0.3383 | 0.8952 | 0.8861 |
| 0.0943 | 9.0909 | 6400 | 0.3674 | 0.8928 | 0.8825 |
| 0.0943 | 9.375 | 6600 | 0.3632 | 0.8954 | 0.8860 |
| 0.0943 | 9.6591 | 6800 | 0.3776 | 0.8889 | 0.8794 |
| 0.0943 | 9.9432 | 7000 | 0.3761 | 0.8935 | 0.8846 |
| 0.0838 | 10.2273 | 7200 | 0.3892 | 0.8930 | 0.8837 |
| 0.0838 | 10.5114 | 7400 | 0.4170 | 0.8882 | 0.8800 |
| 0.0838 | 10.7955 | 7600 | 0.4109 | 0.8920 | 0.8823 |
| 0.0718 | 11.0795 | 7800 | 0.4371 | 0.8902 | 0.8808 |
| 0.0718 | 11.3636 | 8000 | 0.4116 | 0.8906 | 0.8820 |
| 0.0718 | 11.6477 | 8200 | 0.4400 | 0.8880 | 0.8802 |
| 0.0718 | 11.9318 | 8400 | 0.4392 | 0.8889 | 0.8809 |
| 0.0621 | 12.2159 | 8600 | 0.4430 | 0.8919 | 0.8825 |
| 0.0621 | 12.5 | 8800 | 0.4558 | 0.8904 | 0.8819 |
| 0.0621 | 12.7841 | 9000 | 0.4548 | 0.8930 | 0.8840 |
| 0.0572 | 13.0682 | 9200 | 0.4681 | 0.8904 | 0.8816 |
| 0.0572 | 13.3523 | 9400 | 0.4781 | 0.8891 | 0.8799 |
| 0.0572 | 13.6364 | 9600 | 0.4710 | 0.8900 | 0.8816 |
| 0.0572 | 13.9205 | 9800 | 0.4691 | 0.8887 | 0.8800 |
| 0.0514 | 14.2045 | 10000 | 0.4772 | 0.8912 | 0.8823 |
| 0.0514 | 14.4886 | 10200 | 0.4740 | 0.8893 | 0.8806 |
| 0.0514 | 14.7727 | 10400 | 0.4753 | 0.8913 | 0.8822 |
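
For reference, the Accuracy and F1 columns above can be produced with a `compute_metrics` function like the following sketch using the `evaluate` library. The card does not state the F1 averaging mode, so `average="macro"` is an assumption.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        # "macro" averaging is an assumption; the card does not specify it.
        "f1": f1.compute(predictions=preds, references=labels, average="macro")["f1"],
    }
```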
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1