LeRobotDataset

LeRobotDataset is a standardized dataset format designed to address the specific needs of robot learning research. In the next few minutes, you’ll see what problems it solves, how it is organized, and where to look first when loading data.

The format provides unified, convenient access to robotics data across modalities, including sensorimotor readings, multiple camera feeds, and teleoperation status. LeRobotDataset also stores general information about the data being collected: textual task descriptions, the type of robot used, and measurement specifics such as the frames per second of both the image and robot-state streams, as well as the types of cameras used and their resolution.

Why a specialized format? Traditional ML datasets (like ImageNet) are simple: one image, one label. Robotics data is much more complex:

- Multiple modalities per timestep: sensorimotor readings, several camera feeds, and teleoperation status
- Temporal structure: observations and actions form trajectories, grouped into episodes of varying length
- Synchronization: image and robot-state streams are recorded at their own frame rates and must be aligned
- Rich context: task descriptions, robot type, and camera specifications travel with the data

LeRobotDataset handles all this complexity seamlessly!

LeRobotDataset provides a unified interface for handling multi‑modal, time‑series data and integrates seamlessly with the PyTorch and Hugging Face ecosystems.
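
As a sketch of what this looks like in practice (the import path and `repo_id` are illustrative and may differ across `lerobot` versions):

```python
import torch
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # path may vary by version

# Download (or reuse from the local cache) a dataset hosted on the Hugging Face Hub.
dataset = LeRobotDataset("lerobot/pusht")

print(dataset.num_episodes)  # number of recorded episodes
print(dataset.num_frames)    # total frames across all episodes
print(dataset.fps)           # frames per second of the synchronized streams

# Because LeRobotDataset behaves like a standard PyTorch dataset, it plugs
# directly into a DataLoader for shuffled mini-batch training.
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
batch = next(iter(loader))
print({k: getattr(v, "shape", type(v)) for k, v in batch.items()})
```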

It is extensible and customizable, and already supports openly available data across a variety of embodiments in LeRobot, ranging from manipulator platforms like the SO‑100 and ALOHA‑2 to humanoid arms and hands, simulation‑based datasets, and even autonomous driving.

The format is built to be efficient for training and flexible enough to accommodate diverse data types, while promoting reproducibility and ease of use.

*Figure: an item from the dataset.*
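
To make the figure concrete, here is roughly what indexing into a dataset returns: a flat dictionary of ready-to-use tensors for a single timestep. The key names below are typical but vary per dataset:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # path may vary by version

dataset = LeRobotDataset("lerobot/pusht")

item = dataset[0]  # one timestep, already decoded and converted to tensors
for key, value in item.items():
    print(key, getattr(value, "shape", value))

# Typical keys (names and shapes depend on the dataset):
#   observation.state            -> proprioceptive readings at this timestep
#   observation.images.<camera>  -> a decoded video frame as a CHW tensor
#   action                       -> the commanded action
#   timestamp, episode_index, frame_index, index -> bookkeeping fields
```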

The Dataset Class Design

You can read more about the design choices behind our dataset class here. A core design choice of LeRobotDataset is separating the underlying data storage from the user-facing API. This allows for efficient storage while presenting the data in an intuitive, ready-to-use format.

Think of it as two layers: a compact on‑disk layout for speed and scale, and a clean Python interface that yields ready‑to‑train tensors.

Datasets are always organized into three main components:

- Tabular data: low-dimensional, high-frequency signals such as robot states and actions, stored in Parquet files
- Visual data: camera frames, encoded into MP4 video files
- Metadata: files under meta/ (such as info.json and stats.json) that describe the schema, frame rates, normalization statistics, and episode boundaries

As you browse a dataset on disk, keep these three buckets in mind—they explain almost everything you’ll see.

For scalability, and to support datasets with potentially millions of trajectories (resulting in hundreds of millions or billions of individual camera frames), we merge data from different episodes into the same high-level structure.

Concretely, a single data file (stored in Parquet format) or recording (stored in MP4 format) often contains multiple episodes. This limits the number of files and speeds up I/O. The trade-off is that metadata becomes the "map" that tells you where each episode begins and ends. Metadata therefore plays a much more "relational" role: just as shared keys in a relational database let you retrieve information across multiple tables, shared indices here let you locate an episode's frames inside the larger files.
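
A minimal sketch of this relational lookup, assuming a local copy of a dataset and illustrative chunk/file names (a shared `episode_index` column plays the role of the database key):

```python
import pandas as pd

# One Parquet file packs many episodes back to back.
df = pd.read_parquet("my_dataset/data/chunk-000/file-000.parquet")

# The shared episode_index column lets us recover individual trajectories.
for episode_id, episode in df.groupby("episode_index"):
    print(f"episode {episode_id}: {len(episode)} frames")
```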

An example structure for a given LeRobotDataset would appear as follows:
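
The exact layout varies with the dataset and format version; the tree below is illustrative:

```
my_dataset/
├── meta/
│   ├── info.json        # schema, fps, feature shapes, format version
│   ├── stats.json       # per-feature statistics used for normalization
│   └── ...              # episode boundaries, task descriptions
├── data/
│   └── chunk-000/
│       ├── file-000.parquet   # tabular data; each file packs many episodes
│       └── ...
└── videos/
    └── observation.images.cam/   # one folder per camera feed
        └── chunk-000/
            ├── file-000.mp4     # each recording also spans multiple episodes
            └── ...
```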

Reading guide: start with meta/info.json to understand the schema and fps; then inspect meta/stats.json for normalization; finally, peek at one file in data/ and videos/ to connect the dots.
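
That first look can be as simple as the following sketch (the local path is illustrative, and key names may differ across format versions):

```python
import json
from pathlib import Path

root = Path("my_dataset")  # illustrative path to a local dataset copy

# info.json describes the dataset: features and shapes, fps, robot type, ...
info = json.loads((root / "meta" / "info.json").read_text())
print(info["fps"])
print(list(info["features"].keys()))

# stats.json holds per-feature statistics (e.g. mean/std) for normalization.
stats = json.loads((root / "meta" / "stats.json").read_text())
print(stats.keys())
```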

Storage Efficiency: By concatenating episodes into larger files, LeRobotDataset avoids the “small files problem” that can slow down filesystems. A dataset with 1M episodes might have only hundreds of actual files on disk!

Pro Tip: The metadata files act like a database index, allowing fast access to specific episodes without loading entire video files.

References

For a full list of references, check out the tutorial.
