Indonesian Natural Language Inference
Indonesian natural language inference (NLI) models trained on various NLI datasets and evaluated on the IndoNLI benchmark.
IndoBERT Lite Base IndoNLI Distil mDeBERTa is a natural language inference (NLI) model based on the ALBERT architecture. It starts from the pre-trained indobenchmark/indobert-lite-base-p1 checkpoint, which is fine-tuned on the IndoNLI dataset, whose premises are drawn from Indonesian Wikipedia, news, and web articles [1], while being distilled from MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7.
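As a usage reference, here is a minimal inference sketch with the transformers pipeline. The hub ID follows the model name in the summary table further below, but the owner namespace is an assumption; check the actual repository path:

```python
from transformers import pipeline

# NOTE: the hub ID is a placeholder based on the model name in the table below;
# prepend the actual owner namespace before use.
classifier = pipeline(
    "text-classification",
    model="indobert-lite-base-p1-indonli-distil-mdeberta",
)

premise = "Andi membeli dua buah apel di pasar."  # "Andi bought two apples at the market."
hypothesis = "Andi membeli buah."                 # "Andi bought fruit."

# text/text_pair is how the pipeline feeds a premise/hypothesis pair to an NLI model.
print(classifier({"text": premise, "text_pair": hypothesis}))
# e.g. [{'label': 'entailment', 'score': ...}] -- label names depend on the model config
```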
| Dataset | dev Acc. (%) | test_lay Acc. (%) | test_expert Acc. (%) |
|---|---|---|---|
| IndoNLI | 77.19 | 74.42 | 61.22 |
| Model | #params | Arch. | Training/Validation data (text) |
|---|---|---|---|
| indobert-lite-base-p1-indonli-distil-mdeberta | 11.7M | ALBERT Base | IndoNLI |
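The description says the student was distilled from MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7, but the card does not document the exact objective. Below is a minimal sketch of the standard soft-label distillation loss (temperature-scaled KL divergence blended with cross-entropy on the gold labels); the temperature and alpha values are illustrative assumptions, not the card's settings:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with a KL term that pulls the student
    toward the teacher's distribution. temperature and alpha are illustrative
    defaults, not values reported by this model card."""
    # Hard-label term: standard cross-entropy against IndoNLI gold labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL divergence on temperature-softened distributions.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale gradients, as in Hinton et al. (2015)
    return alpha * ce + (1.0 - alpha) * kl
```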
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

These settings map directly onto transformers TrainingArguments; a minimal sketch follows the results table below.

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|---|---|---|---|---|---|---|---|
| 0.5053 | 1.0 | 646 | 0.4511 | 0.7506 | 0.7462 | 0.7530 | 0.7445 |
| 0.4516 | 2.0 | 1292 | 0.4458 | 0.7692 | 0.7683 | 0.7684 | 0.7697 |
| 0.4192 | 3.0 | 1938 | 0.4433 | 0.7701 | 0.7677 | 0.7685 | 0.7673 |
| 0.3647 | 4.0 | 2584 | 0.4497 | 0.7720 | 0.7699 | 0.7697 | 0.7701 |
| 0.3502 | 5.0 | 3230 | 0.4530 | 0.7679 | 0.7661 | 0.7658 | 0.7668 |
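As noted above, the listed hyperparameters can be expressed as transformers TrainingArguments. In this sketch, output_dir is a placeholder, and the Adam betas/epsilon are the values the card reports:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="indobert-lite-base-p1-indonli-distil-mdeberta",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam settings as reported in the hyperparameter list.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```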
[1] Mahendra, R., Aji, A. F., Louvan, S., Rahman, F., & Vania, C. (2021, November). IndoNLI: A Natural Language Inference Dataset for Indonesian. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Base model: indobenchmark/indobert-lite-base-p1