Improve dataset card: update task category, license, add tags and sample usage

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +46 -9
README.md CHANGED
@@ -1,6 +1,14 @@
 ---
-license: apache-2.0
 language: en
+license: cc-by-4.0
+size_categories:
+- n<1K
+task_categories:
+- text-ranking
+tags:
+- retrieval
+- embeddings
+- theoretical-limitations
 dataset_info:
 - config_name: default
   features:
@@ -46,22 +54,22 @@ configs:
   data_files:
   - split: queries
     path: queries.jsonl
-task_categories:
-- question-answering
-size_categories:
-- n<1K
 ---
 
 # LIMIT-small
 
 A retrieval dataset that exposes fundamental theoretical limitations of embedding-based retrieval models. Despite using simple queries like "Who likes Apples?", state-of-the-art embedding models achieve less than 20% recall@100 on LIMIT full and cannot solve LIMIT-small (46 docs).
 
+## Introduction
+
+Vector embeddings have been tasked with an ever-increasing set of retrieval tasks over the years, with a nascent rise in using them for reasoning, instruction-following, coding, and more. These new benchmarks push embeddings to work for any query and any notion of relevance that could be given. While prior works have pointed out theoretical limitations of vector embeddings, there is a common assumption that these difficulties are exclusively due to unrealistic queries, and that those that are not can be overcome with better training data and larger models. In this work, we demonstrate that we may encounter these theoretical limitations in realistic settings with extremely simple queries. We connect known results in learning theory, showing that the number of top-k subsets of documents capable of being returned as the result of some query is limited by the dimension of the embedding. We empirically show that this holds true even if we restrict to k=2, and directly optimize on the test set with free parameterized embeddings. We then create a realistic dataset called LIMIT that stress tests models based on these theoretical results, and observe that even state-of-the-art models fail on this dataset despite the simple nature of the task. Our work shows the limits of embedding models under the existing single vector paradigm and calls for future research to develop methods that can resolve this fundamental limitation.
+
 ## Links
 
-- **Paper**: [On the Theoretical Limitations of Embedding-Based Retrieval](https://arxiv.org/abs/2508.21038)
-- **Code**: [github.com/google-deepmind/limit](https://github.com/google-deepmind/limit)
-- **Full version**: [LIMIT](https://huggingface.co/datasets/orionweller/LIMIT/) (50k documents)
-- **Small version**: [LIMIT-small](https://huggingface.co/datasets/orionweller/LIMIT-small/) (46 documents only)
+- **Paper**: [On the Theoretical Limitations of Embedding-Based Retrieval](https://arxiv.org/abs/2508.21038)
+- **Code**: [github.com/google-deepmind/limit](https://github.com/google-deepmind/limit)
+- **Full version**: [LIMIT](https://huggingface.co/datasets/orionweller/LIMIT/) (50k documents)
+- **Small version**: [LIMIT-small](https://huggingface.co/datasets/orionweller/LIMIT-small/) (46 documents only)
 
 ## Dataset Details
 
@@ -83,6 +91,35 @@ The dataset follows standard MTEB format with three configurations:
 ### Purpose
 Tests whether embedding models can represent all top-k combinations of relevant documents, based on theoretical results connecting embedding dimension to representational capacity. Despite the simple nature of queries, state-of-the-art models struggle due to fundamental dimensional limitations.
 
+## Sample Usage
+
+### Loading with Hugging Face Datasets
+You can also load the data using the `datasets` library from Hugging Face:
+```python
+from datasets import load_dataset
+ds = load_dataset("orionweller/LIMIT-small", "corpus")  # also available: queries, test (contains qrels)
+```
+
+### Evaluation with MTEB
+Evaluation was done using the [MTEB framework](https://github.com/embeddings-benchmark/mteb) on the [v2.0.0 branch](https://github.com/embeddings-benchmark/mteb/tree/v2.0.0) (soon to be `main`). An example is:
+
+```python
+import mteb
+from sentence_transformers import SentenceTransformer
+
+model_name = "sentence-transformers/all-MiniLM-L6-v2"
+
+# load the model using MTEB
+model = mteb.get_model(model_name)  # defaults to SentenceTransformer(model_name) if not implemented in MTEB
+# or using SentenceTransformers
+model = SentenceTransformer(model_name)
+
+# select the desired tasks and evaluate
+tasks = mteb.get_tasks(tasks=["LIMITSmallRetrieval"])  # or use LIMITRetrieval for the full dataset
+results = mteb.evaluate(model, tasks=tasks)
+print(results)
+```
+
 ## Citation
 
 ```bibtex