Update README.md
README.md
CHANGED
@@ -6,22 +6,26 @@ license: odc-by

<!-- Provide a quick summary of the dataset. -->

-Zyda2 is a 5 trillion token language modeling dataset created by collecting open and high quality datasets and combining them and
+Zyda2 is a 5 trillion token language modeling dataset created by collecting open, high-quality datasets and combining them with cross-deduplication and model-based quality filtering. Zyda2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.
+
+To construct Zyda2, we took the best open-source datasets available: Zyda, FineWeb, DCLM, and Dolma. Models trained on Zyda2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Thanks to our post-processing pipeline of deduplication, filtering, and weighting, Zyda2 outperforms all of its constituent datasets in resulting model quality.

An early version of Zyda2 was used as the primary dataset for phase 1 pretraining of our Zamba2 series [of](Zyphra/Zamba2-2.7B) [models](Zyphra/Zamba2-1.2B), which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda2 as a pretraining dataset.

-
-
+According to our evaluations, Zyda2 is the most performant per-token open dataset available. Zyda2 excels at educational and natural language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as [Starcoder](https://huggingface.co/bigcode/starcoder).
+
+// TODO Ablation scores key plots
+
+For more information, please see our technical blog (-/TODO LINK)

## How to download

-// TODO
+// TODO YURY
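
The download instructions are still a TODO in this revision. Until they land, here is a minimal sketch using the Hugging Face `datasets` library; the repo id `Zyphra/Zyda2` and the `text` field name are assumptions to verify against the final dataset card:

```python
from itertools import islice
from datasets import load_dataset

# Hypothetical repo id and field name -- confirm against the dataset card.
ds = load_dataset("Zyphra/Zyda2", split="train", streaming=True)

# Stream a couple of documents without downloading the full 5T-token dataset.
for doc in islice(ds, 2):
    print(doc["text"][:200])
```
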
## Breakdown by component

-// TODO
+// TODO YURY

### Dataset Description
@@ -47,7 +51,7 @@ Dataset fields:
### Source Data

-Zyda2 is comprised of four high quality open-source datasets
+Zyda2 comprises four high-quality open-source datasets:
Zyda1: https://huggingface.co/datasets/Zyphra/Zyda
@@ -60,18 +64,6 @@ FineWeb-Edu-2 https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu
// Pie chart of composition -- YURY!

-#### Data Collection and Processing
-
-<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
-Zyda was created using a two-stage post-processing pipeline consisting of *filtering* and *deduplication*.
-
-For the filtering stage, we utilized a set of hand-crafted and tuned filters derived from a number of sources such as C4, RedPajama, and Gopher, in addition to our own filters.
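
As a rough illustration of what such heuristic filters look like, here is a toy sketch in the spirit of the C4/Gopher rules; the thresholds are invented for illustration and are not Zyphra's tuned values (their real filters live in the Zyda_processing repo linked below):

```python
def passes_heuristic_filters(text: str) -> bool:
    """Toy C4/Gopher-style quality rules with illustrative thresholds."""
    words = text.split()
    # Gopher-style document length bounds.
    if not 50 <= len(words) <= 100_000:
        return False
    # Mean word length bounds catch gibberish and symbol soup.
    mean_word_len = sum(len(w) for w in words) / len(words)
    if not 3 <= mean_word_len <= 10:
        return False
    # Require that most tokens are purely alphabetic.
    alpha_frac = sum(w.isalpha() for w in words) / len(words)
    return alpha_frac >= 0.7
```
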
-
-For the deduplication stage, we used minhash approximate deduplication. We deduplicated on 13-grams, used a minhash signature size of 128, and filtered out documents above a Jaccard similarity of 0.4.
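
These parameters (13-gram shingles, 128-permutation signatures, a 0.4 Jaccard cutoff) map onto a standard minhash-LSH setup. A minimal sketch using the `datasketch` library, which is not necessarily the tooling the Zyda pipeline itself used:

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM, NGRAM, THRESHOLD = 128, 13, 0.4

def signature(text: str) -> MinHash:
    """Build a MinHash signature over the word 13-grams of a document."""
    words = text.split()
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(len(words) - NGRAM + 1, 1)):
        m.update(" ".join(words[i:i + NGRAM]).encode("utf-8"))
    return m

# LSH index that flags candidate pairs above ~0.4 Jaccard similarity.
lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)

def keep(doc_id: str, text: str) -> bool:
    """Index the document unless a near-duplicate is already present."""
    sig = signature(text)
    if lsh.query(sig):
        return False  # near-duplicate found, drop the document
    lsh.insert(doc_id, sig)
    return True
```
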
-
-For full details on our data processing, see the [Zyda technical report](https://arxiv.org/abs/2406.01981) and our [dataset processing code](https://github.com/Zyphra/Zyda_processing).
-

#### Personal and Sensitive Information