# LIMIT-small
A retrieval dataset that exposes fundamental theoretical limitations of embedding-based retrieval models. Despite using simple queries like "Who likes Apples?", state-of-the-art embedding models achieve less than 20% recall@100 on the full LIMIT dataset and cannot solve LIMIT-small (46 documents).

## Links

- **Paper**: [On the Theoretical Limitations of Embedding-Based Retrieval](TODO: add paper link)
- **Code**: [github.com/google-deepmind/limit](https://github.com/google-deepmind/limit)
- **Small version**: [LIMIT-small](https://huggingface.co/datasets/orionweller/LIMIT-small/) (46 documents only)

## Dataset Details

**Queries** (1,000): Simple questions of the form "Who likes [attribute]?"
- Examples: "Who likes Quokkas?", "Who likes Joshua Trees?", "Who likes Disco Music?"

**Corpus** (46 documents): Short biographical texts describing people and their preferences
- Format: "[Name] likes [attribute1] and [attribute2]."
- Example: "Geneva Durben likes Quokkas and Apples."

**Qrels** (2,000): Each query has exactly 2 relevant documents (score=1); the 1,000 query-to-pair assignments are drawn from the C(46,2) = 1,035 possible 2-document combinations of the 46 corpus documents.
### Format

The dataset follows the standard MTEB format with three configurations:
- `default`: query-document relevance judgments (qrels); keys: `corpus-id`, `query-id`, `score` (1 for relevant)
- `queries`: query texts with IDs; keys: `_id`, `text`
- `corpus`: document texts with IDs; keys: `_id`, `title` (empty), and `text`
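A minimal sketch of how the three configurations fit together, using hypothetical toy records with the documented keys (the IDs and the second document's text are illustrative placeholders, not values from the dataset):

```python
# Toy records mirroring the documented schemas of the three configurations.
queries = [{"_id": "q1", "text": "Who likes Quokkas?"}]
corpus = [
    {"_id": "d1", "title": "", "text": "Geneva Durben likes Quokkas and Apples."},
    {"_id": "d2", "title": "", "text": "..."},  # placeholder document
]
qrels = [
    {"query-id": "q1", "corpus-id": "d1", "score": 1},
    {"query-id": "q1", "corpus-id": "d2", "score": 1},
]

# Join qrels to the corpus: map each query ID to its relevant document texts.
docs_by_id = {d["_id"]: d["text"] for d in corpus}
relevant = {}
for row in qrels:
    relevant.setdefault(row["query-id"], []).append(docs_by_id[row["corpus-id"]])

print(relevant["q1"][0])  # Geneva Durben likes Quokkas and Apples.
```

With the Hugging Face `datasets` library, each configuration should load by name, e.g. `load_dataset("orionweller/LIMIT-small", "queries")`.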
### Purpose

Tests whether embedding models can represent all top-k combinations of relevant documents, based on theoretical results connecting embedding dimension to representational capacity. Despite the simple nature of the queries, state-of-the-art models struggle due to fundamental dimensional limitations.
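For reference, the recall@k metric used to score models on this benchmark can be sketched as below; the `recall_at_k` helper, the ranking, and the document IDs are hypothetical illustrations, not part of the dataset or its official evaluation code:

```python
def recall_at_k(ranked_doc_ids, relevant_ids, k):
    """Fraction of a query's relevant documents found in the top-k ranked list."""
    top_k = set(ranked_doc_ids[:k])
    return len(top_k & set(relevant_ids)) / len(relevant_ids)

# Hypothetical ranking for one query whose two relevant docs are d1 and d2.
ranking = ["d3", "d1", "d4", "d2", "d5"]
print(recall_at_k(ranking, ["d1", "d2"], 2))  # 0.5 (only d1 is in the top 2)
print(recall_at_k(ranking, ["d1", "d2"], 4))  # 1.0 (both found by rank 4)
```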
## Citation

```bibtex
@article{weller2025limit,
  title={On the Theoretical Limitations of Embedding-Based Retrieval},
  author={Weller, Orion and Boratko, Michael and Naim, Iftekhar and Lee, Jinhyuk},
  journal={arXiv preprint arXiv:TODO},
  year={2025}
}
```