nielsr (HF Staff) committed
Commit 1fd5465 · verified · Parent: 75bfee7

Improve dataset card: Add abstract, sample usage, and relevant tags


This PR enhances the dataset card by adding the paper's abstract, which gives readers a concise overview of the research and the dataset's purpose.

A new "Sample Usage" section provides a Python snippet that shows how to load and inspect the `test.jsonl` data, so users can get started with the dataset quickly.

Additionally, the metadata now includes `llm` and `benchmark` tags, improving discoverability and accurately reflecting the dataset's role as a benchmark for large language models.

Files changed (1)
  1. README.md +24 -0
README.md CHANGED
@@ -4,11 +4,17 @@ task_categories:
- text-generation
tags:
- agent
+ - llm
+ - benchmark
---
+
# Long Horizon Execution

This project contains the dataset accompanying the paper "[The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs](https://arxiv.org/abs/2509.09677)"

+ ## Abstract
+ Does continued scaling of large language models (LLMs) yield diminishing returns? Real-world value often stems from the length of task an agent can complete. We start this work by observing the simple but counterintuitive fact that marginal gains in single-step accuracy can compound into exponential improvements in the length of a task a model can successfully complete. Then, we argue that failures of LLMs when simple tasks are made longer arise from mistakes in execution, rather than an inability to reason. We propose isolating execution capability, by explicitly providing the knowledge and plan needed to solve a long-horizon task. We find that larger models can correctly execute significantly more turns even when small models have 100% single-turn accuracy. We observe that the per-step accuracy of models degrades as the number of steps increases. This is not just due to long-context limitations -- curiously, we observe a self-conditioning effect -- models become more likely to make mistakes when the context contains their errors from prior turns. Self-conditioning does not reduce by just scaling the model size. In contrast, recent thinking models do not self-condition, and can also execute much longer tasks in a single turn. We conclude by benchmarking frontier thinking models on the length of task they can execute in a single turn. Overall, by focusing on the ability to execute, we hope to reconcile debates on how LLMs can solve complex reasoning problems yet fail at simple tasks when made longer, and highlight the massive benefits of scaling model size and sequential test-time compute for long-horizon tasks.
+
**GitHub:** [https://github.com/long-horizon-execution/measuring-execution/](https://github.com/long-horizon-execution/measuring-execution/)

## Description

@@ -28,6 +34,24 @@ The provided dataset is configured with a turn complexity of `K=1` (one key per
- _"input"_: Concatenate every `N` items into a single comma-separated string.
- _"output"_: The new running sum for the grouped turn is simply the **last** running sum from the original group of `N` turns.

+ ## Sample Usage
+
+ To load and inspect the `test.jsonl` dataset:
+
+ ```python
+ import json
+
+ file_path = "test.jsonl"
+ samples = []
+ with open(file_path, 'r', encoding='utf-8') as f:
+     for line in f:
+         samples.append(json.loads(line))
+
+ print(f"Loaded {len(samples)} samples.")
+ print("First sample:")
+ print(samples[0])
+ ```
+
## Benchmark
![Benchmark of Frontier models.](benchmark.png)

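As context for the grouping note carried over in the diff above (how to raise the turn complexity beyond `K=1`), here is a minimal, illustrative sketch that is not part of this commit. It assumes each record is a dict with an `"input"` item and an `"output"` running sum, as the README describes; the helper name `group_turns` and the sample values are hypothetical.

```python
def group_turns(turns, n):
    """Regroup K=1 turns into higher-complexity turns of N items each.

    Assumes each turn is a dict with an "input" item and an "output"
    running sum, per the README's description.
    """
    grouped = []
    for i in range(0, len(turns), n):
        chunk = turns[i:i + n]
        grouped.append({
            # Concatenate the N per-turn items into one comma-separated string.
            "input": ",".join(str(t["input"]) for t in chunk),
            # The new running sum is the last running sum of the original group.
            "output": chunk[-1]["output"],
        })
    return grouped


# Hypothetical K=1 turns (illustrative values only).
turns = [
    {"input": "apple", "output": 1},
    {"input": "banana", "output": 3},
    {"input": "apple", "output": 4},
    {"input": "cherry", "output": 9},
]
print(group_turns(turns, 2))
# -> [{'input': 'apple,banana', 'output': 3}, {'input': 'apple,cherry', 'output': 9}]
```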