Dataset: mteb · Modalities: Text · Formats: parquet · Languages: Korean · Size: < 1K · Libraries: Datasets, pandas
Samoed committed commit 0fa0994 · verified · 1 parent: c41b8d4

Add dataset card

Files changed (1): README.md (+39 −29)
README.md CHANGED
@@ -5,6 +5,8 @@ language:
 - kor
 license: cc-by-sa-4.0
 multilinguality: monolingual
+source_datasets:
+- on-and-on/clustering_klue_mrc_ynat_title
 task_categories:
 - text-classification
 task_ids: []
@@ -45,6 +47,9 @@ this dataset is a processed and redistributed version of the KLUE-Ynat & KLUE-MR
 | Domains | News, Written |
 | Reference | https://huggingface.co/datasets/on-and-on/clustering_klue_mrc_ynat_title |
 
+Source datasets:
+- [on-and-on/clustering_klue_mrc_ynat_title](https://huggingface.co/datasets/on-and-on/clustering_klue_mrc_ynat_title)
+
 
 ## How to evaluate on this task
 
@@ -53,15 +58,15 @@ You can evaluate an embedding model on this dataset using the following code:
 ```python
 import mteb
 
-task = mteb.get_tasks(["KlueYnatMrcCategoryClustering"])
-evaluator = mteb.MTEB(task)
+task = mteb.get_task("KlueYnatMrcCategoryClustering")
+evaluator = mteb.MTEB([task])
 
 model = mteb.get_model(YOUR_MODEL)
 evaluator.run(model)
 ```
 
 <!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
-To learn more about how to run models on `mteb` task check out the [GitHub repitory](https://github.com/embeddings-benchmark/mteb).
+To learn more about how to run models on `mteb` task check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
 
 ## Citation
 
@@ -90,7 +95,7 @@ If you use this dataset, please cite the dataset as well as [mteb](https://githu
 }
 
 @article{muennighoff2022mteb,
-author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
+author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Loïc and Reimers, Nils},
 title = {MTEB: Massive Text Embedding Benchmark},
 publisher = {arXiv},
 journal={arXiv preprint arXiv:2210.07316},
@@ -117,31 +122,36 @@ desc_stats = task.metadata.descriptive_stats
 ```json
 {
 "test": {
-"num_samples": 1,
-"number_of_characters": 904,
-"min_text_length": 904,
-"average_text_length": 904.0,
-"max_text_length": 904,
-"unique_texts": 904,
-"min_labels_per_text": 99,
-"average_labels_per_text": 904.0,
-"max_labels_per_text": 240,
-"unique_labels": 5,
-"labels": {
-"3": {
-"count": 173
-},
-"2": {
-"count": 164
-},
-"1": {
-"count": 99
-},
-"0": {
-"count": 240
-},
-"5": {
-"count": 228
+"num_samples": 904,
+"text_statistics": {
+"total_text_length": 30703,
+"min_text_length": 21,
+"average_text_length": 33.96349557522124,
+"max_text_length": 89,
+"unique_texts": 904
+},
+"image_statistics": null,
+"label_statistics": {
+"min_labels_per_text": 1,
+"average_label_per_text": 1.0,
+"max_labels_per_text": 1,
+"unique_labels": 5,
+"labels": {
+"3": {
+"count": 173
+},
+"2": {
+"count": 164
+},
+"1": {
+"count": 99
+},
+"0": {
+"count": 240
+},
+"5": {
+"count": 228
+}
 }
 }
 }
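The updated statistics can be sanity-checked in a few lines of stdlib Python: with exactly one label per text (`min_labels_per_text` = `max_labels_per_text` = 1), the five label counts must sum to `num_samples`, and `total_text_length / num_samples` must reproduce `average_text_length`. A quick sketch, with the values copied from the hunk above:

```python
import json

# Statistics copied from the updated descriptive_stats block.
stats = json.loads("""
{
  "num_samples": 904,
  "text_statistics": {
    "total_text_length": 30703,
    "average_text_length": 33.96349557522124
  },
  "label_statistics": {
    "labels": {
      "3": {"count": 173},
      "2": {"count": 164},
      "1": {"count": 99},
      "0": {"count": 240},
      "5": {"count": 228}
    }
  }
}
""")

n = stats["num_samples"]
ts = stats["text_statistics"]
labels = stats["label_statistics"]["labels"]

# One label per text, so the label counts must sum to num_samples.
label_total = sum(v["count"] for v in labels.values())
assert label_total == n

# average_text_length should equal total_text_length / num_samples.
assert abs(ts["total_text_length"] / n - ts["average_text_length"]) < 1e-9

print(f"{label_total} labelled texts, mean length {ts['total_text_length'] / n:.2f}")
```

Both checks pass, which the previous statistics (e.g. `num_samples: 1` with `average_labels_per_text: 904.0`) would have failed.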