Update README.md
README.md CHANGED

@@ -12,8 +12,10 @@ pretty_name: Creative Commons Common Crawl
This dataset contains text from 52 Common Crawl snapshots, covering about half of the Common Crawl snapshots available to date and spanning all years of Common Crawl's operation up to 2024.
We found a high level of duplication across this collection, suggesting that including more snapshots would lead to only a modest increase in total token yield.
From these snapshots, we extract HTML content using [FastWarc](https://arxiv.org/abs/2112.03103).
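For illustration, a minimal sketch of this extraction step using FastWarc's `ArchiveIterator`, assuming a locally downloaded `.warc.gz` snapshot file; the path handling and the absence of any content-type filtering are simplifications, not the dataset's actual pipeline configuration:

```python
from fastwarc.warc import ArchiveIterator, WarcRecordType

# Minimal sketch: iterate over the HTTP response records in one (gzipped) WARC
# file and yield (target URL, raw payload bytes). Filtering and error handling
# are omitted for brevity.
def iter_responses(warc_path):
    with open(warc_path, "rb") as stream:
        for record in ArchiveIterator(stream, record_types=WarcRecordType.response):
            url = record.headers.get("WARC-Target-URI")
            payload = record.reader.read()  # HTTP body; for web pages this is the HTML
            yield url, payload
```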
-Then, using a regular expression adapted from [the C4Corpus project](https://aclanthology.org/L16-1146/)
-To ensure license accuracy, we manually verified the top 1000 domains by content volume, retaining only the 537 domains with confirmed licenses where the Creative Commons designation applied to the all text content rather than embedded media or a subset of the text on the domain.
+Then, we identify documents bearing Creative Commons license markers using a regular expression adapted from [the C4Corpus project](https://aclanthology.org/L16-1146/).
+To ensure license accuracy, we manually verified the top 1000 domains by content volume, retaining only the 537 domains with confirmed licenses where the Creative Commons designation applied to all text content rather than embedded media or a subset of the text on the domain.
+As an additional check, we did a second round of annotations with the assistance of OpenAI's o3 model. Specifically, we instructed the model to examine each web domain and identify the ones that were openly licensed. We then had a second team manually annotate the cases where the model did not approve of a domain but the original human auditor did. This resulted in **todo** domains being removed.
+
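The license-detection step above relies on a regular expression adapted from C4Corpus. The pattern below is only a simplified stand-in that matches creativecommons.org license and public-domain deed URLs; the expression actually used is more thorough:

```python
import re

# Simplified stand-in for a C4Corpus-style Creative Commons detector: look for
# links to creativecommons.org license or public-domain deeds and capture the
# license variant and version. Illustrative only.
CC_LICENSE_RE = re.compile(
    r"creativecommons\.org/(?:licenses|publicdomain)/"
    r"(?P<variant>by-nc-sa|by-nc-nd|by-sa|by-nd|by-nc|by|zero|mark)"
    r"(?:/(?P<version>\d+\.\d+))?",
    re.IGNORECASE,
)

def find_cc_license(html):
    """Return the first detected license, e.g. {'variant': 'by-sa', 'version': '4.0'}, or None."""
    match = CC_LICENSE_RE.search(html)
    return match.groupdict() if match else None
```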
We extract the main content of these documents and remove boilerplate using [Resiliparse](https://github.com/chatnoir-eu/chatnoir-resiliparse).
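A minimal sketch of this step with Resiliparse's `extract_plain_text`; the options shown are illustrative and may not match the exact settings used:

```python
from resiliparse.extract.html2text import extract_plain_text

# Minimal sketch of main-content extraction with Resiliparse. main_content=True
# drops boilerplate such as navigation bars, headers, and footers.
def extract_main_text(html):
    return extract_plain_text(html, main_content=True)
```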
We perform URL-level exact deduplication and use Bloom filters to remove near-duplicates with 80% ngram overlap.
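A toy sketch of the two deduplication passes described above: a set for exact URL-level deduplication and a simple Bloom filter over word n-grams that flags a document when at least 80% of its n-grams have already been seen. The n-gram length, filter size, and hash count here are illustrative assumptions, and a real pipeline would use an optimized Bloom filter implementation:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter; a production pipeline would use an optimized implementation."""
    def __init__(self, num_bits=1 << 27, num_hashes=5):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.blake2b(f"{i}:{item}".encode(), digest_size=8).digest()
            yield int.from_bytes(digest, "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos >> 3] |= 1 << (pos & 7)

    def __contains__(self, item):
        return all(self.bits[pos >> 3] & (1 << (pos & 7)) for pos in self._positions(item))


def word_ngrams(text, n=13):  # n=13 is an illustrative choice, not the pipeline's setting
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]


seen_urls = set()            # exact URL-level deduplication
ngram_filter = BloomFilter()

def is_duplicate(url, text, threshold=0.8):
    """Flag a document if its URL was seen before or >= 80% of its n-grams are already in the filter."""
    if url in seen_urls:
        return True
    seen_urls.add(url)
    grams = word_ngrams(text)
    if grams:
        overlap = sum(g in ngram_filter for g in grams) / len(grams)
        if overlap >= threshold:
            return True
        for g in grams:
            ngram_filter.add(g)
    return False
```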
We also employ rule-based filters matching [Dolma](https://arxiv.org/abs/2402.00159);
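A sketch of rule-based filtering in the spirit of Dolma's heuristics (length bounds, mean word length, alphabetic ratio, truncated-line fraction); the specific rules and thresholds below are illustrative, not Dolma's exact configuration:

```python
# Illustrative document-quality rules in the spirit of Dolma's heuristic filters.
# Thresholds are examples chosen for readability, not Dolma's actual values.
def passes_rule_filters(text):
    if not text.strip():
        return False
    words = text.split()
    if not 50 <= len(words) <= 100_000:                     # document length bounds
        return False
    if sum(len(w) for w in words) / len(words) > 10:        # implausible mean word length
        return False
    if sum(ch.isalpha() for ch in text) / len(text) < 0.6:  # low alphabetic ratio
        return False
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if sum(ln.strip().endswith("...") for ln in lines) / len(lines) > 0.3:
        return False                                        # too many truncated lines
    return True
```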