---
pretty_name: Australian Tax Guidance Retrieval
task_categories:
- text-retrieval
- question-answering
- text-ranking
tags:
- legal
- law
- tax
- australia
- markdown
source_datasets:
- ATO Community
language:
- en
language_details: en-AU
annotations_creators:
- expert-generated
language_creators:
- found
license: cc-by-4.0
size_categories:
- n<1K
dataset_info:
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: test
    num_examples: 112
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_examples: 105
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_examples: 112
configs:
- config_name: default
  data_files:
  - split: test
    path: default.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
---
# Australian Tax Guidance Retrieval 🏦
**Australian Tax Guidance Retrieval** by [Isaacus](https://isaacus.com/) is a novel, diverse, and challenging legal information retrieval evaluation dataset consisting of 112 real-life Australian tax law questions paired with expert-annotated, relevant Australian Government tax guidance and policies.

Uniquely, this dataset sources its real-life tax questions from the posts of everyday Australian taxpayers on the [ATO Community forum](https://community.ato.gov.au/s/), with relevant Australian Government guidance and policy in turn being sourced from the answers of tax professionals and ATO employees.

Because the questions center on substantive and often complex tax problems broadly representative of those faced by everyday Australian taxpayers, this dataset is especially valuable for robustly evaluating the legal retrieval capabilities and tax domain understanding of information retrieval models.

This dataset forms part of the [Massive Legal Embeddings Benchmark (MLEB)](https://isaacus.com/mleb), the largest, most diverse, and most comprehensive benchmark for legal text embedding models. 

## Structure 🗂️
As per the MTEB information retrieval dataset format, this dataset comprises three subsets: `default`, `corpus`, and `queries`.

The `default` subset pairs questions (`query-id`) with relevant materials (`corpus-id`), each pair having a `score` of 1.

The `corpus` subset contains Markdown-formatted Australian Government guidance and policies, with the text of such materials stored in the `text` key and their IDs stored in the `_id` key. There is also a `title` column, which is deliberately set to an empty string in all cases for compatibility with the [`mteb`](https://github.com/embeddings-benchmark/mteb) library.

The `queries` subset contains Markdown-formatted questions, with the text of a question stored in the `text` key and its ID stored in the `_id` key.
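
For example, the three subsets can be loaded with the [`datasets`](https://github.com/huggingface/datasets) library as sketched below. The repository ID is a placeholder assumption; substitute this dataset's actual path on the Hugging Face Hub.

```python
from datasets import load_dataset

# NOTE: placeholder repository ID; replace with this dataset's actual Hub path.
REPO_ID = "isaacus/australian-tax-guidance-retrieval"

# Relevance judgements: one row per (query-id, corpus-id) pair, each with a score of 1.
qrels = load_dataset(REPO_ID, "default", split="test")

# Markdown-formatted Australian Government guidance and policies.
corpus = load_dataset(REPO_ID, "corpus", split="corpus")

# Markdown-formatted taxpayer questions.
queries = load_dataset(REPO_ID, "queries", split="queries")

# Simple lookup tables from IDs to texts.
corpus_by_id = {doc["_id"]: doc["text"] for doc in corpus}
queries_by_id = {q["_id"]: q["text"] for q in queries}

# Inspect a few question/guidance pairs.
for pair in qrels.select(range(3)):
    print(queries_by_id[pair["query-id"]][:80], "->", pair["corpus-id"])
```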

## Methodology 🧪
This dataset was constructed by:
1. For each of the 14 sub-topics of the [ATO Community forum](https://community.ato.gov.au/s/) that did not come under the parent topics 'Online Services' and 'Tax Professionals' (which were found to consist almost exclusively of practical questions around the use of ATO services rather than substantive tax law queries), selecting 8 questions that:
    1. Had at least one answer containing at least one hyperlink (where there were multiple competing answers, the answer the asker had selected as the best answer was used; otherwise, answers from ATO employees were preferred over those of tax professionals).
    2. Were about a substantive tax law problem and were not merely practical questions about, for example, the use of ATO services or how to file tax returns.
2. For each sampled question, visiting the hyperlink in the selected answer that appeared most relevant to the question and copying as much text from the linked page as appeared relevant, ranging from a single paragraph to the entire document.
3. Using a purpose-built Chrome extension to extract questions and relevant passages directly to Markdown to preserve the semantics of added markup.
4. Lightly cleaning queries and passages by replacing consecutive sequences of at least two newlines with two consecutive newlines and removing leading and trailing whitespace.
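
As a minimal sketch, the light cleaning in step 4 can be reproduced with a simple regular expression (the exact implementation used is an assumption):

```python
import re

def clean(text: str) -> str:
    # Replace runs of two or more consecutive newlines with exactly two newlines,
    # then strip leading and trailing whitespace.
    return re.sub(r"\n{2,}", "\n\n", text).strip()
```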

## License 📜
This dataset is licensed under [CC BY 4.0](https://choosealicense.com/licenses/cc-by-4.0/), which permits both non-commercial and commercial use provided that appropriate attribution is given.

## Citation 🔖
If you use this dataset, please cite the [Massive Legal Embeddings Benchmark (MLEB)](https://arxiv.org/abs/2510.19365):
```bibtex
@misc{butler2025massivelegalembeddingbenchmark,
      title={The Massive Legal Embedding Benchmark (MLEB)}, 
      author={Umar Butler and Abdur-Rahman Butler and Adrian Lucas Malec},
      year={2025},
      eprint={2510.19365},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.19365}, 
}
```