---
dataset_info:
- config_name: distractors
  features:
  - name: evidence
    dtype: string
  - name: evidence_id
    dtype: int64
  splits:
  - name: train
    num_bytes: 735316804
    num_examples: 500000
  download_size: 417661627
  dataset_size: 735316804
- config_name: test
  features:
  - name: claim
    dtype: string
  - name: evidence
    dtype: string
  - name: evidence_id
    dtype: int64
  - name: label
    dtype: string
  - name: evidences
    sequence: string
  - name: evidence_ids
    sequence: string
  - name: labels
    sequence: string
  splits:
  - name: train
    num_bytes: 1174731
    num_examples: 206
  download_size: 556372
  dataset_size: 1174731
configs:
- config_name: distractors
  data_files:
  - split: train
    path: distractors/train-*
- config_name: test
  data_files:
  - split: train
    path: test/train-*
size_categories:
- 100K<n<1M
---

## Data Stats

- 206 claims
- 500k distractors

## Data Structure

### Test

- claim: the claim text
- evidence: ground-truth (GT) evidence passage
- evidence_id: GT evidence ID
- label: GT label
- evidences: list of all evidence passages for the claim
- evidence_ids: list of all evidence IDs
- labels: list of all labels

### Distractors

- evidence
- evidence_id

## Process Code

```python
import ast

import pandas as pd
from datasets import Dataset

# Test claims with their gold evidence IDs and labels.
claims = pd.read_csv("./scifact_open_retriever_test.csv")

# Full document corpus, also used as the distractor pool.
docs = pd.read_csv("./scifact_open_docs.csv")

id2doc = dict(zip(docs["ID"], docs["Doc"]))

data = {
    "claim": [],
    "evidence": [],
    "evidence_id": [],
    "label": [],
    "evidences": [],
    "evidence_ids": [],
    "labels": [],
}

for _, row in claims.iterrows():
    # "Gold" and "Label" hold stringified Python lists;
    # ast.literal_eval parses them safely (unlike eval).
    evidence_ids = ast.literal_eval(row["Gold"])
    labels = ast.literal_eval(row["Label"])
    evidence_id = int(evidence_ids[0])
    label = str(labels[0])
    data["claim"].append(row["Query"])
    data["evidence"].append(id2doc[evidence_id])
    data["evidence_id"].append(evidence_id)
    data["label"].append(label)
    data["evidences"].append([id2doc[int(eid)] for eid in evidence_ids])
    data["evidence_ids"].append(evidence_ids)
    data["labels"].append(labels)

ds = Dataset.from_dict(data)

distractors = Dataset.from_dict({
    "evidence": list(docs["Doc"]),
    "evidence_id": list(docs["ID"]),
})

distractors.push_to_hub("umbc-scify/scifact-open", "distractors")
ds.push_to_hub("umbc-scify/scifact-open", "test")
```