Daft is a high-performance data engine providing simple and reliable data processing for any modality and scale. Daft has native support for reading from and writing to Hugging Face datasets.

To get started, install Daft with the `huggingface` extra:

```bash
pip install 'daft[huggingface]'
```

Daft can read datasets directly from the Hugging Face Hub using the `daft.read_huggingface()` function or via the `hf://datasets/` protocol.
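To verify the installation, you can print the installed version (a minimal check; assumes Daft exposes the conventional `__version__` attribute):

```python
import daft

# quick sanity check that the package imports and reports its version
print(daft.__version__)
```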
Using `daft.read_huggingface()`, you can easily load a dataset:

```python
import daft

df = daft.read_huggingface("username/dataset_name")
```

This will read the entire dataset into a DataFrame.
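Since Daft DataFrames are lazy, you can chain transformations on the loaded dataset before anything is materialized. A minimal sketch, assuming the dataset has a string column named `text` (the column name is hypothetical; substitute one from your dataset):

```python
import daft
from daft import col

df = daft.read_huggingface("username/dataset_name")

# build a lazy query: keep rows whose text is longer than 100 characters
filtered = df.where(col("text").str.length() > 100).select("text")

# materialize the query and print the first rows
filtered.show()
```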
Not only can you read entire datasets, but you can also read individual files from a dataset repository. Using a read function that takes a path (such as `daft.read_parquet()`, `daft.read_csv()`, or `daft.read_json()`), specify a Hugging Face dataset path via the `hf://datasets/` prefix:
```python
import daft

# read a specific Parquet file
df = daft.read_parquet("hf://datasets/username/dataset_name/file_name.parquet")

# or a CSV file
df = daft.read_csv("hf://datasets/username/dataset_name/file_name.csv")

# or a set of Parquet files using a glob pattern
df = daft.read_parquet("hf://datasets/username/dataset_name/**/*.parquet")
```

Daft can write Parquet files to a Hugging Face dataset repository using `daft.DataFrame.write_huggingface()`. Daft supports Content-Defined Chunking and Xet for faster, deduplicated writes.
Basic usage:

```python
import daft

df: daft.DataFrame = ...

df.write_huggingface("username/dataset_name")
```

See the `DataFrame.write_huggingface` API page for more info.
The `token` parameter in `daft.io.HuggingFaceConfig` can be used to specify a Hugging Face access token for requests that require authentication (e.g. reading private dataset repositories or writing to a dataset repository).
Example of loading a dataset with a specified token:

```python
import daft
from daft.io import IOConfig, HuggingFaceConfig

io_config = IOConfig(hf=HuggingFaceConfig(token="your_token"))
df = daft.read_parquet("hf://datasets/username/dataset_name", io_config=io_config)
```
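The same `io_config` can be passed when writing. A sketch, under the assumption that `write_huggingface` accepts an `io_config` parameter like the read functions do (see the API page to confirm):

```python
import daft
from daft.io import IOConfig, HuggingFaceConfig

io_config = IOConfig(hf=HuggingFaceConfig(token="your_token"))

df: daft.DataFrame = ...
# write to a (possibly private) dataset repository, authenticating with the token
df.write_huggingface("username/dataset_name", io_config=io_config)
```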