Improve dataset card: Add paper link, code, task category, and detailed description

#1
by nielsr - opened
Files changed (1)
  1. README.md +74 -3
README.md CHANGED
@@ -1,3 +1,74 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ task_categories:
+ - question-answering
+ language:
+ - en
+ tags:
+ - reinforcement-learning
+ - agents
+ - web-search
+ - LLM
+ - long-horizon
+ ---
+
+ # ASearcher-train-data: Training Data for Long-Horizon Agentic Search
+
+ This repository contains `ASearcher-train-data`, a large-scale, open-source training dataset integral to the ASearcher project. ASearcher is an open-source framework designed for large-scale online reinforcement learning (RL) training of search agents, aiming to advance Search Intelligence to expert-level performance.
+
+ The dataset is presented in the paper [Beyond Ten Turns: Unlocking Long-Horizon Agentic Search with Large-Scale Asynchronous RL](https://huggingface.co/papers/2508.07976). It consists of high-quality, challenging question-answering (QA) pairs, autonomously synthesized by a prompt-based LLM agent, enabling agents to learn complex, long-horizon search strategies.
+
+ * **Paper**: [https://huggingface.co/papers/2508.07976](https://huggingface.co/papers/2508.07976)
+ * **Code**: [https://github.com/inclusionAI/AReaL](https://github.com/inclusionAI/AReaL)
+ * **Project Page**: [https://inclusionai.github.io/AReaL/](https://inclusionai.github.io/AReaL/)
+ * **Related Models**: [https://huggingface.co/collections/inclusionAI/asearcher-6891d8acad5ebc3a1e1fb2d1](https://huggingface.co/collections/inclusionAI/asearcher-6891d8acad5ebc3a1e1fb2d1)
+
+ ## Introduction
+
+ ASearcher's mission is to advance Search Intelligence to expert-level performance through large-scale online RL training of search agents. We are fully committed to open source, releasing model weights, detailed training methodologies, and data synthesis pipelines. This dataset empowers developers to build their own high-performance search agents easily and cost-effectively.
+
+ ## Data Synthesis
+
+ The training data in this repository is generated by a prompt-based LLM agent designed to autonomously create grounded, challenging, and highly uncertain QA pairs. The synthesis process begins with basic questions, which the agent then iteratively refines through two key strategies (sketched below):
+
+ * **Fuzzing**: Increasing uncertainty by obscuring key details in the query.
+ * **Context Injection**: Augmenting questions with external facts retrieved via tools to deepen complexity.
+
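+ A minimal sketch of this refinement loop, assuming hypothetical `llm` and `search_tool` callables and made-up prompt wording; the actual pipeline lives in the AReaL repository:
+
+ ```python
+ import random
+
+ def synthesize_question(basic_question: str, llm, search_tool, n_rounds: int = 4) -> str:
+     """Iteratively harden a basic question via fuzzing and context injection.
+
+     `llm` and `search_tool` are hypothetical placeholders for a prompt-based
+     LLM client and a retrieval tool; this is not the authors' actual code.
+     """
+     question = basic_question
+     for _ in range(n_rounds):
+         if random.random() < 0.5:
+             # Fuzzing: obscure a key detail (e.g., replace a named entity
+             # with an indirect description) to increase uncertainty.
+             question = llm(f"Rewrite this question, obscuring one key detail: {question}")
+         else:
+             # Context injection: retrieve an external fact via a tool and
+             # fold it into the question to deepen its complexity.
+             fact = search_tool(question)
+             question = llm(f"Rewrite this question so answering it also requires this fact: {fact}\n\n{question}")
+     return question
+ ```
+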
+ Each generated question then undergoes rigorous multi-stage validation (see the sketch after this list):
+
+ * **Quality Assurance**: Checks for fluency, timeliness, and logical coherence.
+ * **Difficulty Verification**: Compares answers generated by a large reasoning model (LRM) against the ground truth to confirm the question is genuinely challenging.
+ * **Answer Uniqueness Validation**: Confirms that incorrect LRM answers are indeed invalid, preserving question integrity.
+
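+ A hedged sketch of the difficulty and uniqueness checks, where all helpers (`lrm_answer`, `judge_equivalent`, `is_valid_answer`) are hypothetical placeholders rather than the authors' actual interfaces:
+
+ ```python
+ def validate_qa(question, ground_truth, lrm_answer, judge_equivalent, is_valid_answer, n_trials=4):
+     """Difficulty verification + answer-uniqueness validation for one QA pair.
+
+     lrm_answer(q)          -> an LRM's attempted answer (hypothetical)
+     judge_equivalent(a, b) -> True if two answers match (hypothetical)
+     is_valid_answer(q, a)  -> True if `a` also correctly answers `q` (hypothetical)
+     """
+     wrong = []
+     for _ in range(n_trials):
+         ans = lrm_answer(question)
+         if not judge_equivalent(ans, ground_truth):
+             wrong.append(ans)
+     # Difficulty verification: the LRM should fail most attempts.
+     if len(wrong) < n_trials - 1:
+         return False
+     # Answer uniqueness: every non-matching LRM answer must be genuinely
+     # invalid; otherwise the question admits multiple correct answers.
+     return all(not is_valid_answer(question, a) for a in wrong)
+ ```
+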
+ ## Sample Usage
+
+ You can easily load this dataset using the Hugging Face `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the ASearcher training dataset
+ dataset = load_dataset("inclusionAI/ASearcher-train-data")
+
+ # Print the dataset structure
+ print(dataset)
+
+ # Access a sample (e.g., the first item in the 'train' split)
+ print(dataset["train"][0])
+ ```
+
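+ For large datasets you can also stream examples instead of downloading everything up front; this uses the standard `streaming=True` option of `load_dataset`:
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream training examples one at a time without a full download
+ stream = load_dataset("inclusionAI/ASearcher-train-data", split="train", streaming=True)
+ print(next(iter(stream)))
+ ```
+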
+ ## Citation
+
+ If you find our work useful, please cite our paper:
+
+ ```bibtex
+ @misc{gao2025turnsunlockinglonghorizonagentic,
+       title={Beyond Ten Turns: Unlocking Long-Horizon Agentic Search with Large-Scale Asynchronous RL},
+       author={Jiaxuan Gao and Wei Fu and Minyang Xie and Shusheng Xu and Chuyi He and Zhiyu Mei and Banghua Zhu and Yi Wu},
+       year={2025},
+       eprint={2508.07976},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL},
+       url={https://arxiv.org/abs/2508.07976},
+ }
+ ```