Datasets:
Improve dataset card: Add task categories, HF paper link, GitHub link, image, and notebook descriptions (#1)
Commit: 8257614abb96e47002f03f14f93e06e073f695b2
Co-authored-by: Niels Rogge <[email protected]>

README.md CHANGED
---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
tags:
- sentiment-analysis
- scientific-literature
- reproducibility
- citation-analysis
---

# CC30k: A Citation Contexts Dataset for Reproducibility-Oriented Sentiment Analysis

[Paper](https://huggingface.co/papers/2511.07790) | [Code](https://github.com/lamps-lab/CC30k)

The CC30k dataset consists of labeled citation contexts obtained through crowdsourcing. Each context is labeled by three independent workers. This README describes the structure and columns of the dataset.



## Dataset Description

The CC30k dataset is unique in its focus on **reproducibility-oriented sentiments (ROS)** in scientific literature. It introduces a novel approach to studying computational reproducibility by leveraging citation contexts: textual fragments in scientific papers that reference prior work. The dataset comprises 30,734 labeled citation contexts from scientific literature published at AI venues, each annotated with one of three ROS labels, `Positive`, `Negative`, or `Neutral`, reflecting the cited work's perceived reproducibility. Alongside each ROS-labeled context, the dataset records metadata about the workers, the reproducibility study, the original (cited) paper, and the citing paper, plus the final aggregated label and the label type. The columns are detailed in the table below:
| **Column Name** | **Description** |
|---|---|
| `input_index` | Unique ID for each citation context. |
| `input_context` | Citation context that workers are asked to label. |
| `input_file_key` | Identifier linking the context to a reproducibility study. |
| `input_first_author` | Name or identifier of the first author of the cited paper. |
| `worker_id_w1` | Unique ID of the first worker who labeled this citation context. |
| `work_time_in_seconds_w1` | Time (in seconds) the first worker took to label the citation context. |
| `worker_id_w2` | Unique ID of the second worker who labeled this citation context. |
| `work_time_in_seconds_w2` | Time (in seconds) the second worker took to label the citation context. |
| `worker_id_w3` | Unique ID of the third worker who labeled this citation context. |
| `work_time_in_seconds_w3` | Time (in seconds) the third worker took to label the citation context. |
| `label_w1` | Label assigned by the first worker. |
| `label_w2` | Label assigned by the second worker. |
| `label_w3` | Label assigned by the third worker. |
| `batch` | Batch number of the posted Mechanical Turk job. |
| `majority_vote` | Final label based on the majority vote among the workers' labels (reproducibility-oriented sentiment: `Positive`, `Negative`, or `Neutral`). |
| `majority_agreement` | How many of the three workers agreed on the final majority vote. |
| `rs_doi` | Digital Object Identifier (DOI) of the reproducibility study paper. |
| `rs_title` | Title of the reproducibility study paper. |
| `rs_authors` | List of authors of the reproducibility study paper. |
| `rs_year` | Publication year of the reproducibility study paper. |
| `rs_venue` | Venue (conference or journal) where the reproducibility study was published. |
| `rs_selected_claims` | Number of claims selected from the original paper for the reproducibility study (by manual inspection). |
| `rs_reproduced_claims` | Number of selected claims that were successfully reproduced (by manual inspection). |
| `reproducibility` | Final reproducibility label assigned to the original paper by manual inspection: *reproducible*, *not-reproducible*, or *partially-reproducible* (if 0 < `rs_reproduced_claims` < `rs_selected_claims`). |
| `org_doi` | DOI of the original (cited) paper that was assessed for reproducibility. |
| `org_title` | Title of the original (cited) paper. |
| `org_authors` | List of authors of the original (cited) paper. |
| `org_year` | Publication year of the original (cited) paper. |
| `org_venue` | Venue where the original (cited) paper was published. |
| `org_paper_url` | URL to access the original (cited) paper. |
| `org_citations` | Number of citations received by the original (cited) paper. |
| `org_s2ga_id` | Semantic Scholar Graph API ID of the original (cited) paper. |
| `citing_doi` | DOI of the citing paper. |
| `citing_year` | Publication year of the citing paper. |
| `citing_venue` | Venue where the citing paper was published. |
| `citing_title` | Title of the citing paper. |
| `citing_authors` | List of authors of the citing paper. |
| `citing_s2ga_id` | Semantic Scholar Graph API ID of the citing paper. |
| `label_type` | Label source: `crowdsourced`, `augmented_human_validated`, or `augmented_machine_labeled`. |
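
To make the schema concrete, here is a minimal loading sketch using the `datasets` library. The Hub repository ID is an assumption (it mirrors the GitHub name), and `majority_agreement` is assumed to store the count of agreeing workers as an integer; adjust both to match the actual release.

```python
from collections import Counter

from datasets import load_dataset

# Assumed Hub repository ID (mirrors the GitHub name); adjust if needed.
ds = load_dataset("lamps-lab/CC30k", split="train")

# Distribution of the final aggregated ROS labels.
print(Counter(ds["majority_vote"]))

# Keep only crowdsourced contexts on which all three workers agreed
# (assumes `majority_agreement` is the integer count of agreeing workers).
unanimous = ds.filter(
    lambda row: row["label_type"] == "crowdsourced" and row["majority_agreement"] == 3
)
print(f"{len(unanimous)} unanimous crowdsourced contexts")
```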
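The aggregation rules described in the table (a majority vote over the three worker labels, and the three-way reproducibility outcome) are simple enough to write down. The sketch below is illustrative, not the dataset's tooling; the function names are hypothetical, and a three-way disagreement falls back to an arbitrary label with agreement 1, since the card does not specify how such ties are resolved.

```python
from collections import Counter


def majority_vote(w1: str, w2: str, w3: str) -> tuple[str, int]:
    """Return the majority ROS label and the number of workers who chose it."""
    # Ties (all three workers disagree) yield an arbitrary label with count 1.
    label, count = Counter([w1, w2, w3]).most_common(1)[0]
    return label, count


def reproducibility_label(selected: int, reproduced: int) -> str:
    """Mirror the rule given for the `reproducibility` column."""
    if reproduced == 0:
        return "not-reproducible"
    if reproduced < selected:
        return "partially-reproducible"
    return "reproducible"


assert majority_vote("Positive", "Negative", "Positive") == ("Positive", 2)
assert reproducibility_label(selected=5, reproduced=2) == "partially-reproducible"
```
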
## Jupyter Notebook Descriptions

The GitHub repository's `notebooks` directory contains the following Jupyter notebooks, which were used to produce and analyze the dataset:

- **R001_AWS_Labelling_Dataset_Preprocessing_Mturk.ipynb**
  - Used to pre-process data for Mechanical Turk (MTurk) labeling.
- **R001_AWS_MTurk_API.ipynb**
  - Used to communicate with MTurk workers.
- **R001_AWS_MTurk_process_results.ipynb**
  - Used to process crowdsourced results from MTurk.
- **R001_Extend_CC25k_Dataset.ipynb**
  - Used to extend the crowdsourced labels with newly augmented ROS-Negative contexts.
- **R_001_Creating_the_RS_superset.ipynb**
  - Used to collect the original papers and reproducibility studies.
- **R_001_Extract_Citing_Paper_Details.ipynb**
  - Used to collect citing-paper details and citation contexts using the Semantic Scholar Graph API (S2GA).
- **R001_MTurk_Sentiment_Analysis_5_models.ipynb**
  - Generates the performance measures for the five selected open-source multiclass sentiment analysis models; a rough scoring sketch follows this list.
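
As a rough illustration of that benchmarking step (not the notebook's actual code), any three-class sentiment pipeline can be scored against `majority_vote`. The repository ID and the model checkpoint below are stand-ins:

```python
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("lamps-lab/CC30k", split="train")  # assumed Hub repository ID

# Stand-in checkpoint; any model emitting positive/neutral/negative labels works.
clf = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
)

sample = ds.select(range(100))  # small sample to keep the sketch cheap
preds = [
    clf(text, truncation=True)[0]["label"].capitalize()
    for text in sample["input_context"]
]
accuracy = sum(p == g for p, g in zip(preds, sample["majority_vote"])) / len(preds)
print(f"Accuracy on the sample: {accuracy:.2%}")
```
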
## Citation ##

```
  primaryClass={cs.DL},
  url={https://arxiv.org/abs/2511.07790},
}
```