---
tags:
  - pretraining
  - web
---

# Common Crawl WET Dataset - c2

This repository contains a large-scale filtered dataset derived from the WET files of the Common Crawl project. The data is cleaned and aggregated to facilitate large-scale natural language processing tasks, especially the pretraining of large language models (LLMs).

## Dataset Description

- **Source:** Common Crawl CC-MAIN-2025-38 (September 2025 crawl).
- **Data Type:** Plaintext extracted from web-crawl WET files, with aggressive metadata and boilerplate filtering.
- **File Size:** Large combined files (~15 GB each) to balance upload size against storage constraints.
- **Preprocessing:** Streamed extraction, metadata removal, and filtering of boilerplate and duplicate content (a sketch follows this list).
- **Purpose:** Primarily designed for pretraining foundation models and LLMs that require diverse, massive-scale natural language corpora.
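
The exact extraction pipeline is not published in this repository. As a minimal sketch of what streamed WET extraction with metadata removal might look like, here is an example using the `warcio` library (the file name, the `min_line_len` threshold, and the line-length heuristic are illustrative assumptions, not the actual filtering rules):

```python
from warcio.archiveiterator import ArchiveIterator

def iter_wet_texts(path, min_line_len=30):
    """Stream plaintext from a WET file, skipping metadata records and short boilerplate lines."""
    with open(path, "rb") as stream:
        for record in ArchiveIterator(stream):
            # WET plaintext lives in 'conversion' records; 'warcinfo' records hold crawl metadata.
            if record.rec_type != "conversion":
                continue
            text = record.content_stream().read().decode("utf-8", errors="replace")
            # Crude boilerplate heuristic: keep only reasonably long lines.
            lines = [ln for ln in text.splitlines() if len(ln.strip()) >= min_line_len]
            if lines:
                yield "\n".join(lines)

# Example (hypothetical local file name):
# for doc in iter_wet_texts("CC-MAIN-2025-38-wet-00000.warc.wet.gz"):
#     ...
```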

## Features

- **Optimized for Pretraining:**
  The dataset is curated and filtered to be suitable for training large language models. It contains clean, high-quality textual data well suited to unsupervised pretraining tasks such as masked language modeling or autoregressive modeling.

- **Large Scale:**
  Contains multiple terabytes of processed data, enabling training on a broad, diverse text corpus that spans a wide range of domains.

- **Streaming Processing:**
  The data was processed in a memory-efficient, streaming manner, so large-scale handling does not require excessive resources.

- **Metadata Cleaning:**
  WARC headers, HTTP headers, and other metadata are stripped extensively, leaving minimal noise in the training text.

- **Resume and Verify:**
  Processing is checkpointed for fault tolerance, and uploaded files are verified on Hugging Face to avoid duplicates (a checkpointing sketch follows this list).

- **Immediate Uploads:**
  Files are uploaded to Hugging Face as soon as they reach the 15 GB size limit, keeping local storage usage bounded.
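
Neither the checkpointing code nor the upload script ships with the dataset; the following is a hedged sketch of the resume-and-verify idea using the `huggingface_hub` client, assuming a simple local JSON state file (`checkpoint.json`, the `upload_shard` helper, and the shard naming are all hypothetical):

```python
import json
import os

from huggingface_hub import HfApi

CHECKPOINT = "checkpoint.json"  # hypothetical local state file

def load_done():
    # Resume: recover the set of WET inputs already processed.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return set(json.load(f))
    return set()

def mark_done(done, name):
    done.add(name)
    with open(CHECKPOINT, "w") as f:
        json.dump(sorted(done), f)

def upload_shard(api, path, repo_id="blue-blue/c2"):
    # Verify: skip shards that already exist in the dataset repo, then upload.
    name = os.path.basename(path)
    if name not in api.list_repo_files(repo_id, repo_type="dataset"):
        api.upload_file(
            path_or_fileobj=path,
            path_in_repo=name,
            repo_id=repo_id,
            repo_type="dataset",
        )
```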

## 💻 Usage

Load the dataset with the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("blue-blue/c2")

# Example: access the first sample
print(dataset["train"][0])
```

After loading, you can iterate over the text samples to pretrain GPT-style, BERT-style, or other large language model architectures.
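
Because the corpus spans multiple terabytes, eagerly downloading it may be impractical. A minimal sketch using the `datasets` streaming mode (the `text` field name is an assumption about the shard schema):

```python
from itertools import islice

from datasets import load_dataset

# Streaming avoids downloading the full multi-terabyte corpus up front.
stream = load_dataset("blue-blue/c2", streaming=True, split="train")

for sample in islice(stream, 3):
    # "text" is assumed to be the field holding the document body.
    print(sample.get("text", sample))
```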

## Pretraining Applications

- **Foundation Model Development:**
  Provides diverse, large-scale text data crucial for training high-quality foundation LLMs.

- **Language Modeling Tasks:**
  Suitable for autoregressive or masked language model pretraining thanks to its scale and quality (a tokenization sketch follows this list).

- **Downstream Adaptation:**
  Can be combined with other specialized datasets for fine-tuning or adaptation tasks.

- **Research & Benchmarking:**
  Serves as a standard large-scale corpus for benchmarking NLP algorithms and analyzing language model behavior.
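
As one concrete illustration of the autoregressive path, here is a minimal sketch that packs streamed samples into fixed-length token blocks with a `transformers` tokenizer (the `gpt2` tokenizer, the block size, and the `text` field are illustrative assumptions):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
BLOCK = 1024  # context length of the target model

def pack_blocks(samples, block=BLOCK):
    """Concatenate token ids across documents and emit fixed-length training blocks."""
    buffer = []
    for sample in samples:
        buffer.extend(tokenizer(sample["text"])["input_ids"])
        buffer.append(tokenizer.eos_token_id)  # document separator
        while len(buffer) >= block:
            yield buffer[:block]
            buffer = buffer[block:]
```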

## Contact

For questions, support, or collaboration:

[email protected]


Thank you for exploring the c2 dataset, a foundational resource for large-scale language modeling and NLP research.

## ⚠️ Note

This dataset is in update mode: it is continuously expanding and improving as new Common Crawl snapshots are processed and added.
Expect regular additions, refinements, and enhanced cleaning over time.