---
license: cc-by-4.0
language: en
dataset_info:
  - config_name: default
    features:
      - name: query-id
        dtype: string
      - name: corpus-id
        dtype: string
      - name: score
        dtype: int64
    splits:
      - name: test
        num_examples: 2000
  - config_name: corpus
    features:
      - name: _id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: corpus
        num_examples: 46
  - config_name: queries
    features:
      - name: _id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: queries
        num_examples: 1000
configs:
  - config_name: default
    data_files:
      - split: test
        path: qrels.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: queries.jsonl
task_categories:
  - question-answering
size_categories:
  - 1K<n<10K
---

# LIMIT-small

A retrieval dataset that exposes fundamental theoretical limitations of embedding-based retrieval models. Despite using simple queries like "Who likes Apples?", state-of-the-art embedding models achieve less than 20% recall@100 on the full LIMIT dataset and cannot solve even LIMIT-small, whose corpus contains only 46 documents.

## Links

## Dataset Details

**Queries (1,000)**: Simple questions asking "Who likes [attribute]?"

- Examples: "Who likes Quokkas?", "Who likes Joshua Trees?", "Who likes Disco Music?"

**Corpus (46 documents)**: Short biographical texts describing people and their preferences

- Format: "[Name] likes [attribute1] and [attribute2]."
- Example: "Geneva Durben likes Quokkas and Apples."

**Qrels (2,000)**: Each query has exactly 2 relevant documents (score=1), covering nearly all possible pairs of the 46 corpus documents (1,000 of the C(46,2) = 1,035 possible pairs).
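
As a quick sanity check of this structure, the qrels can be inspected directly. A minimal sketch, assuming the dataset is hosted under the Hub id `orionweller/LIMIT-small` (an assumption; substitute the actual id); config, split, and field names are taken from the YAML header above:

```python
from collections import defaultdict
from math import comb

from datasets import load_dataset

# Assumed Hub id; config, split, and field names come from the YAML header.
qrels = load_dataset("orionweller/LIMIT-small", "default", split="test")

# Group the relevant corpus ids under each query id.
relevant = defaultdict(set)
for row in qrels:
    if row["score"] == 1:
        relevant[row["query-id"]].add(row["corpus-id"])

# Every query should have exactly 2 relevant documents.
assert all(len(docs) == 2 for docs in relevant.values())

# The 1,000 pairs cover nearly all C(46, 2) = 1,035 possible pairs.
pairs = {frozenset(docs) for docs in relevant.values()}
print(f"{len(pairs)} distinct pairs out of {comb(46, 2)} possible")
```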

## Format

The dataset follows the standard MTEB format with three configurations:

- `default`: query-document relevance judgments (qrels); keys: `query-id`, `corpus-id`, `score` (1 for relevant)
- `queries`: query texts with IDs; keys: `_id`, `text`
- `corpus`: document texts with IDs; keys: `_id`, `title` (empty), `text`
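
Each configuration can be loaded separately; a sketch, using the same assumed Hub id as above:

```python
from datasets import load_dataset

# Each configuration maps to its own JSONL file and split (see the YAML header).
corpus = load_dataset("orionweller/LIMIT-small", "corpus", split="corpus")
queries = load_dataset("orionweller/LIMIT-small", "queries", split="queries")
qrels = load_dataset("orionweller/LIMIT-small", "default", split="test")

print(corpus[0])   # {'_id': ..., 'title': '', 'text': '... likes ... and ...'}
print(queries[0])  # {'_id': ..., 'text': 'Who likes ...?'}
print(qrels[0])    # {'query-id': ..., 'corpus-id': ..., 'score': 1}
```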

## Purpose

LIMIT tests whether embedding models can represent all top-k combinations of relevant documents, building on theoretical results that connect embedding dimension to representational capacity. Despite the simple nature of the queries, state-of-the-art models struggle because of these fundamental dimensional limitations.
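
To observe this empirically, any off-the-shelf embedding model can be scored on the dataset. A minimal sketch, not the paper's evaluation setup: `all-MiniLM-L6-v2` is only an example model, and the Hub id is assumed as above.

```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

corpus = load_dataset("orionweller/LIMIT-small", "corpus", split="corpus")
queries = load_dataset("orionweller/LIMIT-small", "queries", split="queries")
qrels = load_dataset("orionweller/LIMIT-small", "default", split="test")

# Any embedding model can be swapped in; this one is only an example.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(corpus["text"], normalize_embeddings=True)
query_emb = model.encode(queries["text"], normalize_embeddings=True)

# Map each query id to the row indices of its relevant documents.
doc_idx = {doc_id: i for i, doc_id in enumerate(corpus["_id"])}
relevant = {}
for row in qrels:
    relevant.setdefault(row["query-id"], set()).add(doc_idx[row["corpus-id"]])

# Cosine similarities (embeddings are L2-normalized), then recall@10.
scores = query_emb @ doc_emb.T
hits = total = 0
for qi, query_id in enumerate(queries["_id"]):
    top10 = set(np.argsort(-scores[qi])[:10].tolist())
    hits += len(top10 & relevant[query_id])
    total += len(relevant[query_id])
print(f"recall@10: {hits / total:.3f}")
```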

## Citation

```bibtex
@article{weller2025limit,
  title={On the Theoretical Limitations of Embedding-Based Retrieval},
  author={Weller, Orion and Boratko, Michael and Naim, Iftekhar and Lee, Jinhyuk},
  journal={arXiv preprint arXiv:TODO},
  year={2025}
}
```