---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: IndicXNLI
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
dataset_info:
- config_name: as
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 172049648
    num_examples: 392702
  - name: test
    num_bytes: 2097452
    num_examples: 5010
  - name: validation
    num_bytes: 1042526
    num_examples: 2490
  download_size: 74371257
  dataset_size: 175189626
- config_name: bn
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 174095464
    num_examples: 392702
  - name: test
    num_bytes: 2113149
    num_examples: 5010
  - name: validation
    num_bytes: 1044438
    num_examples: 2490
  download_size: 74466392
  dataset_size: 177253051
- config_name: gu
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 167102046
    num_examples: 392702
  - name: test
    num_bytes: 2016682
    num_examples: 5010
  - name: validation
    num_bytes: 992611
    num_examples: 2490
  download_size: 73329179
  dataset_size: 170111339
- config_name: hi
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 175582034
    num_examples: 392702
  - name: test
    num_bytes: 2109705
    num_examples: 5010
  - name: validation
    num_bytes: 1043487
    num_examples: 2490
  download_size: 74840429
  dataset_size: 178735226
- config_name: kn
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 188780510
    num_examples: 392702
  - name: test
    num_bytes: 2309807
    num_examples: 5010
  - name: validation
    num_bytes: 1139277
    num_examples: 2490
  download_size: 78285369
  dataset_size: 192229594
- config_name: ml
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 187851515
    num_examples: 392702
  - name: test
    num_bytes: 2362836
    num_examples: 5010
  - name: validation
    num_bytes: 1171953
    num_examples: 2490
  download_size: 77263292
  dataset_size: 191386304
- config_name: mr
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 170752688
    num_examples: 392702
  - name: test
    num_bytes: 2078079
    num_examples: 5010
  - name: validation
    num_bytes: 1028494
    num_examples: 2490
  download_size: 73035248
  dataset_size: 173859261
- config_name: or
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 173273552
    num_examples: 392702
  - name: test
    num_bytes: 2135557
    num_examples: 5010
  - name: validation
    num_bytes: 1047017
    num_examples: 2490
  download_size: 73223669
  dataset_size: 176456126
- config_name: pa
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 172665273
    num_examples: 392702
  - name: test
    num_bytes: 2077462
    num_examples: 5010
  - name: validation
    num_bytes: 1030309
    num_examples: 2490
  download_size: 73760675
  dataset_size: 175773044
- config_name: ta
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 206105782
    num_examples: 392702
  - name: test
    num_bytes: 2520985
    num_examples: 5010
  - name: validation
    num_bytes: 1637644
    num_examples: 3238
  download_size: 79964341
  dataset_size: 210264411
- config_name: te
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 175597688
    num_examples: 392702
  - name: test
    num_bytes: 2145349
    num_examples: 5010
  - name: validation
    num_bytes: 1056174
    num_examples: 2490
  download_size: 75184880
  dataset_size: 178799211
configs:
- config_name: as
  data_files:
  - split: train
    path: as/train-*
  - split: test
    path: as/test-*
  - split: validation
    path: as/validation-*
- config_name: bn
  data_files:
  - split: train
    path: bn/train-*
  - split: test
    path: bn/test-*
  - split: validation
    path: bn/validation-*
- config_name: gu
  data_files:
  - split: train
    path: gu/train-*
  - split: test
    path: gu/test-*
  - split: validation
    path: gu/validation-*
- config_name: hi
  data_files:
  - split: train
    path: hi/train-*
  - split: test
    path: hi/test-*
  - split: validation
    path: hi/validation-*
- config_name: kn
  data_files:
  - split: train
    path: kn/train-*
  - split: test
    path: kn/test-*
  - split: validation
    path: kn/validation-*
- config_name: ml
  data_files:
  - split: train
    path: ml/train-*
  - split: test
    path: ml/test-*
  - split: validation
    path: ml/validation-*
- config_name: mr
  data_files:
  - split: train
    path: mr/train-*
  - split: test
    path: mr/test-*
  - split: validation
    path: mr/validation-*
- config_name: or
  data_files:
  - split: train
    path: or/train-*
  - split: test
    path: or/test-*
  - split: validation
    path: or/validation-*
- config_name: pa
  data_files:
  - split: train
    path: pa/train-*
  - split: test
    path: pa/test-*
  - split: validation
    path: pa/validation-*
- config_name: ta
  data_files:
  - split: train
    path: ta/train-*
  - split: test
    path: ta/test-*
  - split: validation
    path: ta/validation-*
- config_name: te
  data_files:
  - split: train
    path: te/train-*
  - split: test
    path: te/test-*
  - split: validation
    path: te/validation-*
---
# Dataset Card for "IndicXNLI"

## Table of Contents

- [Dataset Card for "IndicXNLI"](#dataset-card-for-indicxnli)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)

## Dataset Description

- **Homepage:** <https://github.com/divyanshuaggarwal/IndicXNLI>
- **Paper:** [IndicXNLI: Evaluating Multilingual Inference for Indian Languages](https://arxiv.org/abs/2204.08776)
- **Point of Contact:** [Divyanshu Aggarwal](mailto:[email protected])
### Dataset Summary

IndicXNLI is similar in form to the existing XNLI dataset, but focuses on the Indic language family. IndicXNLI includes NLI data for eleven major Indic languages: Assamese (`as`), Gujarati (`gu`), Kannada (`kn`), Malayalam (`ml`), Marathi (`mr`), Odia (`or`), Punjabi (`pa`), Tamil (`ta`), Telugu (`te`), Hindi (`hi`), and Bengali (`bn`).
### Supported Tasks and Leaderboards

**Tasks:** Natural Language Inference

**Leaderboards:** Currently, there is no leaderboard for this dataset.
### Languages

- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure

### Data Instances

One example from the `hi` configuration is shown below as a Python dictionary.

```python
{
    'premise': 'अवधारणात्मक रूप से क्रीम स्किमिंग के दो बुनियादी आयाम हैं-उत्पाद और भूगोल।',
    'hypothesis': 'उत्पाद और भूगोल क्रीम स्किमिंग का काम करते हैं।',
    'label': 1  # neutral
}
```
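
An instance of this shape can be retrieved by indexing a loaded split. A minimal sketch, assuming the Hugging Face repo id `Divyanshu/indicxnli` from the usage section further down; the index `0` is purely illustrative and is not guaranteed to be the example shown above.

```python
from datasets import load_dataset

# Load only the Hindi training split.
hi_train = load_dataset("Divyanshu/indicxnli", "hi", split="train")

example = hi_train[0]          # any index returns the same dict structure
print(example["premise"])      # premise sentence (string)
print(example["hypothesis"])   # hypothesis sentence (string)
print(example["label"])        # integer class label: 0, 1, or 2
```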
### Data Fields

- `premise (string)`: the premise sentence
- `hypothesis (string)`: the hypothesis sentence
- `label (integer)`: integer class label: `0` if the premise entails the hypothesis (`entailment`), `2` if the premise contradicts the hypothesis (`contradiction`), and `1` otherwise (`neutral`)
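
Because `label` is declared as a `class_label` feature in the metadata above, the integer values can be converted to and from their string names with the `ClassLabel` helpers of the `datasets` library. A minimal sketch, assuming the `hi` configuration:

```python
from datasets import load_dataset

hi_validation = load_dataset("Divyanshu/indicxnli", "hi", split="validation")
label_feature = hi_validation.features["label"]   # a datasets.ClassLabel

print(label_feature.names)                        # ['entailment', 'neutral', 'contradiction']
print(label_feature.int2str(1))                   # 'neutral'
print(label_feature.str2int("contradiction"))     # 2
```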
### Data Splits

<!-- Below is the dataset split given for the `hi` dataset.

```python
DatasetDict({
    train: Dataset({
        features: ['premise', 'hypothesis', 'label'],
        num_rows: 392702
    })
    test: Dataset({
        features: ['premise', 'hypothesis', 'label'],
        num_rows: 5010
    })
    validation: Dataset({
        features: ['premise', 'hypothesis', 'label'],
        num_rows: 2490
    })
})
``` -->
| Language  | ISO 639-1 Code | Train   | Test  | Dev   |
|-----------|----------------|---------|-------|-------|
| Assamese  | as             | 392,702 | 5,010 | 2,490 |
| Bengali   | bn             | 392,702 | 5,010 | 2,490 |
| Gujarati  | gu             | 392,702 | 5,010 | 2,490 |
| Hindi     | hi             | 392,702 | 5,010 | 2,490 |
| Kannada   | kn             | 392,702 | 5,010 | 2,490 |
| Malayalam | ml             | 392,702 | 5,010 | 2,490 |
| Marathi   | mr             | 392,702 | 5,010 | 2,490 |
| Oriya     | or             | 392,702 | 5,010 | 2,490 |
| Punjabi   | pa             | 392,702 | 5,010 | 2,490 |
| Tamil     | ta             | 392,702 | 5,010 | 2,490 |
| Telugu    | te             | 392,702 | 5,010 | 2,490 |
<!-- The dataset split remains the same across all languages. -->
## Dataset usage

A code snippet for loading the dataset with the `datasets` library:

```python
from datasets import load_dataset

# Each language is a separate configuration; pick one by its ISO 639-1 code, e.g. Hindi.
dataset = load_dataset("Divyanshu/indicxnli", "hi")
```
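
Since every language is a separate configuration, loading the full dataset means iterating over the language codes listed above. A sketch along those lines:

```python
from datasets import load_dataset

# ISO 639-1 codes of the eleven language configurations.
languages = ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"]

# Each entry is a DatasetDict with 'train', 'test', and 'validation' splits.
indicxnli = {lang: load_dataset("Divyanshu/indicxnli", lang) for lang in languages}

for lang, splits in indicxnli.items():
    print(lang, {name: ds.num_rows for name, ds in splits.items()})
```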
## Dataset Creation

Machine translation of the English XNLI dataset into the 11 listed Indic languages.
### Curation Rationale

[More information needed]

### Source Data

[XNLI dataset](https://cims.nyu.edu/~sbowman/xnli/)

#### Initial Data Collection and Normalization

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

#### Who are the source language producers?

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

#### Human Verification Process

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

## Considerations for Using the Data

### Social Impact of Dataset

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

### Discussion of Biases

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

### Other Known Limitations

[Detailed in the paper](https://arxiv.org/abs/2204.08776)

### Dataset Curators

Divyanshu Aggarwal, Vivek Gupta, Anoop Kunchukuttan

### Licensing Information

Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information

If you use any of the datasets, models or code modules, please cite the following paper:

```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08776,
  doi       = {10.48550/ARXIV.2204.08776},
  url       = {https://arxiv.org/abs/2204.08776},
  author    = {Aggarwal, Divyanshu and Gupta, Vivek and Kunchukuttan, Anoop},
  keywords  = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title     = {IndicXNLI: Evaluating Multilingual Inference for Indian Languages},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

<!-- ### Contributions -->