---
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
  - zh
tags:
  - llama-factory
size_categories:
  - 1K<n<10K
configs:
  - config_name: ld1-hq3
    data_files:
      - split: train
        path: ld1-hq3/*parquet
  - config_name: ld0
    data_files:
      - split: train
        path: ld0/*parquet
  - config_name: ld1
    data_files:
      - split: train
        path: ld1/*parquet
  - config_name: ld1-hq2
    data_files:
      - split: train
        path: ld1-hq2/*parquet
  - config_name: m
    data_files:
      - split: train
        path: m/*parquet
  - config_name: 'n'
    data_files:
      - split: train
        path: n/*parquet
  - config_name: o
    data_files:
      - split: train
        path: o/*parquet
  - config_name: p
    data_files:
      - split: train
        path: p/*parquet
  - config_name: q
    data_files:
      - split: train
        path: q/*parquet
---

# CP-DATASET

This repository contains the pre-training dataset for TAIDE.

Each data group is stored in its own branch.

You can load a group with the following code:

```python
from datasets import load_dataset

dataset = load_dataset(
    'TLLM/TAIDE-CP-Data',
    name='<DATA_GROUP_NAME>',  # e.g. 'ld0', 'ld1', or 'ld1-hq2'
    num_proc=32,  # use 32 processes for downloading and building the dataset
)
```

* Note that data groups overlap, so the same example may appear in more than one group.
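Because of this overlap, combining several groups can introduce duplicate examples. A minimal deduplication sketch in plain Python (not part of this repository; it assumes each example is a dict with a `text` field, as is typical for pre-training data) might look like:

```python
import hashlib


def dedup_examples(groups):
    """Yield examples from multiple data groups, skipping exact duplicates.

    `groups` is an iterable of iterables of dicts. Duplicates are detected
    by hashing the 'text' field (assumed to exist in every example).
    """
    seen = set()
    for group in groups:
        for example in group:
            digest = hashlib.sha256(example['text'].encode('utf-8')).hexdigest()
            if digest not in seen:
                seen.add(digest)
                yield example


# Usage with two small in-memory groups that share one example:
a = [{'text': 'hello'}, {'text': 'world'}]
b = [{'text': 'world'}, {'text': 'again'}]
merged = list(dedup_examples([a, b]))
# 'world' appears only once in the merged output.
```

For real use you would pass the `train` splits of the loaded datasets; only exact duplicates are removed by this hash-based approach, not near-duplicates.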