# Robotics-Course

## Docs

- [LeRobotDataset](https://huggingface.co/learn/robotics-course/unit1/3.md)
- [Introduction to Robot Learning](https://huggingface.co/learn/robotics-course/unit1/1.md)
- [LeRobot: An End-to-End Robot Learning Library](https://huggingface.co/learn/robotics-course/unit1/2.md)
- [Code Example: Datasets, in practice](https://huggingface.co/learn/robotics-course/unit1/4.md)
- [Understanding Robot Kinematics](https://huggingface.co/learn/robotics-course/unit2/3.md)
- [Classical Robotics](https://huggingface.co/learn/robotics-course/unit2/1.md)
- [Types of Robot Motion](https://huggingface.co/learn/robotics-course/unit2/2.md)
- [From Classical to Learning-Based Robotics](https://huggingface.co/learn/robotics-course/unit2/5.md)
- [Control Systems and Their Limitations](https://huggingface.co/learn/robotics-course/unit2/4.md)
- [Welcome to the 🤗 Robotics Course](https://huggingface.co/learn/robotics-course/unit0/1.md)
- [Getting Started with LeRobot](https://huggingface.co/learn/robotics-course/unit0/2.md)

### LeRobotDataset
https://huggingface.co/learn/robotics-course/unit1/3.md

# LeRobotDataset

LeRobotDataset is a standardized dataset format designed to address the specific needs of robot learning research. In the next few minutes, you’ll see what problems it solves, how it is organized, and where to look first when loading data.

The format provides unified, convenient access to robotics data across modalities, including sensorimotor readings, multiple camera feeds, and teleoperation status. LeRobotDataset also stores general information about the data being collected, including textual task descriptions, the type of robot used, and measurement specifics such as frames per second for both image and robot state streams, together with the types of cameras used, their resolution, and frame-rate.
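As a toy illustration of where this general information lives, the metadata can be read with plain `json`. The field names below are simplified, hypothetical stand-ins; a real `meta/info.json` contains many more fields:

```python
import json
import pathlib
import tempfile

# Hypothetical, simplified info.json-style metadata (not the exact schema).
info = {
    "robot_type": "so100",
    "fps": 30,
    "features": {
        "observation.images.top": {"dtype": "video", "shape": [480, 640, 3]},
        "observation.state": {"dtype": "float32", "shape": [6]},
        "action": {"dtype": "float32", "shape": [6]},
    },
}

# Write it to a temporary meta/ directory, mimicking the on-disk layout.
meta_dir = pathlib.Path(tempfile.mkdtemp()) / "meta"
meta_dir.mkdir(parents=True)
(meta_dir / "info.json").write_text(json.dumps(info))

# Loading the schema back tells you what every frame will contain.
loaded = json.loads((meta_dir / "info.json").read_text())
print(loaded["fps"])                          # 30
print(loaded["features"]["action"]["shape"])  # [6]
```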

> [!TIP]
> **Why a specialized format?** Traditional ML datasets (like ImageNet) are simple: one image, one label. Robotics data is much more complex:
> - **Multi-modal**: Images + sensor readings + actions, all synchronized
> - **Temporal**: Both observations and actions are recorded over time, and very much in a sequential manner
> - **Episodic**: Data is organized in trajectories/episodes
> - **High-dimensional**: Multiple camera views (i.e., multiple images), joint states, forces, etc.
>
> LeRobotDataset handles all this complexity seamlessly!

LeRobotDataset provides a unified interface for handling multi‑modal, time‑series data and integrates seamlessly with the PyTorch and Hugging Face ecosystems.

It is extensible and customizable, and already supports openly available data across a variety of embodiments in LeRobot, ranging from manipulator platforms like the SO‑100 and ALOHA‑2 to humanoid arms and hands, simulation‑based datasets, and even autonomous driving.

The format is built to be efficient for training and flexible enough to accommodate diverse data types, while promoting reproducibility and ease of use.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robotics-course/item-from-dataset.png" alt="Item from dataset">

## The Dataset Class Design

You can read more about the design choices behind our dataset class [here](https://huggingface.co/blog/lerobot-datasets-v3).
A core design choice behind LeRobotDataset is separating the underlying data storage from the user-facing API. This allows for efficient storage while presenting the data in an intuitive, ready-to-use format.

Think of it as two layers: a compact on‑disk layout for speed and scale, and a clean Python interface that yields ready‑to‑train tensors.

Datasets are always organized into three main components:

- **Tabular Data**: Low-dimensional, high-frequency data such as joint states and actions is stored in efficient memory-mapped files, typically offloaded to Hugging Face's mature `datasets` library, which provides fast access with limited memory consumption.
- **Visual Data**: To handle large volumes of camera data, frames are concatenated and encoded into MP4 files. Frames from the same episode are always grouped together into the same video, and multiple videos are grouped together by camera. To reduce stress on the file system, groups of videos for the same camera view are also split into multiple sub-directories.
- **Metadata**: A collection of JSON files that describe the dataset's structure, serving as the relational counterpart to both the tabular and visual data. Metadata include the feature schema, frame rates, normalization statistics, and episode boundaries.

As you browse a dataset on disk, keep these three buckets in mind—they explain almost everything you’ll see.


For scalability, and to support datasets with potentially millions of trajectories (resulting in hundreds of millions or billions of individual camera frames), we merge data from different episodes into the same high-level structure.

Concretely, a single data file (stored in Parquet format) or recording (stored in MP4 format) often contains multiple episodes. This limits the number of files and speeds up I/O. The trade-off is that metadata becomes the "map" that tells you where each episode begins and ends. In turn, the metadata take on a much more "relational" function: just as shared keys in a relational database allow information to be retrieved from multiple tables, the metadata tell you which file and frame range belong to each episode.
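This relational role can be sketched in a few lines. The file paths and field names below are illustrative, not the exact on-disk schema:

```python
# Toy episode index in the spirit of meta/episodes/*: each entry records
# which shared data file an episode lives in and its frame range.
episodes = [
    {"episode_index": 0, "file": "data/chunk-000/file-000.parquet", "from": 0, "to": 250},
    {"episode_index": 1, "file": "data/chunk-000/file-000.parquet", "from": 250, "to": 480},
    {"episode_index": 2, "file": "data/chunk-000/file-001.parquet", "from": 0, "to": 310},
]

def locate(episode_index: int):
    """Resolve an episode to (file, row range), like a database index lookup."""
    entry = episodes[episode_index]
    return entry["file"], range(entry["from"], entry["to"])

path, rows = locate(1)
print(path)       # data/chunk-000/file-000.parquet
print(len(rows))  # 230
```

Note that episodes 0 and 1 live in the same Parquet file; only the index tells them apart.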

An example structure for a given LeRobotDataset would appear as follows:

- `meta/info.json`: The central metadata file. It contains the complete dataset schema, defining all features (e.g., `observation.state`, `action`), their shapes, and data types. It also stores crucial information like the dataset's frames-per-second (`fps`), the LeRobot version at the time of capture, and the path templates used to locate data and video files.
- `meta/stats.json`: This file stores aggregated statistics (mean, std, min, max) for each feature across the entire dataset, used for data normalization by most policy models and accessible via `dataset.meta.stats`.
- `meta/tasks.jsonl`: This file contains the mapping from natural language task descriptions to integer task indices, which are useful for task-conditioned policy training.
- `meta/episodes/*`: This directory contains metadata about each individual episode, such as its length, the corresponding task, and pointers to where its data is stored in the dataset's files. For scalability, this information is stored in files rather than a single large JSON file.
- `data/*`: Contains the core frame-by-frame tabular data, using Parquet files to allow for fast, memory-mapped access. To improve performance and handle large datasets, data from multiple episodes are concatenated into larger files, organized into chunked subdirectories to keep directory sizes manageable. A single file typically contains data from more than one episode.
- `videos/*`: Contains the MP4 video files for all visual observation streams. Similar to the `data/` directory, the video footage from multiple episodes is concatenated into single MP4 files. This strategy significantly reduces the number of files in the dataset, which is more efficient for modern filesystems.

Reading guide: start with `meta/info.json` to understand the schema and fps; then inspect `meta/stats.json` for normalization; finally, peek at one file in `data/` and `videos/` to connect the dots.
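For step two of the reading guide, here is a sketch of how per-feature statistics are typically applied. The numbers are made up; real values come from `meta/stats.json` (or `dataset.meta.stats`):

```python
# Made-up statistics in the per-feature mean/std shape described above;
# a real dataset also stores min and max.
stats = {
    "observation.state": {
        "mean": [0.5, -0.25, 0.0],
        "std": [0.25, 0.5, 2.0],
    }
}

def normalize(feature, value):
    """Element-wise mean/std normalization, as most policy models expect."""
    s = stats[feature]
    return [(v - m) / d for v, m, d in zip(value, s["mean"], s["std"])]

print(normalize("observation.state", [0.75, 0.25, 4.0]))  # [1.0, 1.0, 2.0]
```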

> [!TIP]
> **Storage Efficiency:** By concatenating episodes into larger files, LeRobotDataset avoids the "small files problem" that can slow down filesystems. A dataset with 1M episodes might have only hundreds of actual files on disk!
>
> **Pro Tip:** The metadata files act like a database index, allowing fast access to specific episodes without loading entire video files.

## References

For a full list of references, check out the [tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial).

- **Implicit Behavioral Cloning** (2022)  
  Pete Florence et al.  
  This paper introduces energy-based models for behavioral cloning, demonstrating how implicit models can handle multi-modal action distributions more effectively than explicit models—a key consideration when designing dataset formats for robot learning.  
  [Paper (CoRL 2022)](https://proceedings.mlr.press/v164/florence22a.html)

- **A Dataset for Interactive Vision-Language Navigation with Unknown Command Feasibility** (2022)  
  Various Authors  
  An example of how specialized dataset formats enable new capabilities in robot learning, particularly for handling multi-modal sensory data and episodic structure.  
  [arXiv:2202.02312](https://huggingface.co/papers/2202.02312)


<EditOnGithub source="https://github.com/huggingface/robotics-course/blob/main/units/en/unit1/3.mdx" />

### Introduction to Robot Learning
https://huggingface.co/learn/robotics-course/unit1/1.md

# Introduction to Robot Learning

Robot learning starts with a simple idea: teach robots to improve from data and experience instead of hand‑coding every behavior. In practice, that means using examples (videos, sensor data, demonstrations) and feedback to help a robot get better at tasks like picking, placing, pushing, or walking.

Why now? Two trends make this possible: modern machine learning is increasingly good at finding intricate patterns, and robotics datasets are becoming easier to collect and share. Together, they let us move from classical "write the physics and the controller" approaches to learning‑based methods that adapt from data.

For example, a robot arm can learn to grasp a block by trying actions and getting rewards for progress (reinforcement learning), or even by watching and imitating human demonstrations (imitation learning). Over time, the same ideas scale to many tasks and even different robot bodies.

> [!TIP]
> By the end of this unit, you will:
> - Understand in plain terms what "robot learning" means and why it's useful
> - See concrete examples of learning from demonstrations and from trial‑and‑error
> - Know what we'll build toward in the next units and how LeRobot fits in

# Some History

Robotics is about helping humans with repetitive, tiring or dangerous tasks. People have been working on this challenge since the 1950s. Recently, advances in machine learning have opened up new ways to build robots. Instead of requiring human experts to write detailed instructions and models for every task, we can now use large amounts of data and computation to help robots learn behaviors on their own.

> [!TIP]
> Some context...
>
> The 1950s saw the birth of both artificial intelligence and robotics as distinct fields. The first industrial robot, [Unimate](https://en.wikipedia.org/wiki/Unimate), was deployed in 1961. It's taken nearly 70 years for these fields to converge in meaningful ways through robot learning!

# The Future of Robotics

Today's robotics researchers are moving away from the traditional approach of writing detailed models and control systems. Instead, they're embracing machine learning to create robots that can:

- Learn direct connections from what they see to what they do, without needing separate systems for perception and control.
- Extract useful information from many types of sensors (cameras, touch sensors, microphones) using data-driven methods.
- Work effectively without needing perfect models of how the world behaves.
- Take advantage of the growing number of open robotics datasets that anyone can access and learn from.

You can watch [this video](https://www.youtube.com/watch?v=VEs1QYEgOQo) to get a better sense of the paradigm shift currently underway in robotics.

> [!WARNING]
> **Key Insight:** This shift represents a fundamental change in how we think about robotics - from engineering precise solutions to learning adaptive behaviors from data.

This trend is especially important because it mirrors how foundation models like GPT and CLIP were developed. These machine learning systems learned to understand and work with text and images by training on massive datasets. As robotics datasets grow larger and robots get equipped with more diverse sensors (regular cameras, infrared cameras, LIDAR, microphones), applying similar learning approaches to robotics becomes increasingly powerful.

Robotics naturally requires expertise in both software and hardware. Adding machine learning to the mix means robotics practitioners need an even broader set of skills, which raises the bar for both research and real-world applications.

## References

For a full list of references, check out the [tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial).

- **ALVINN: An Autonomous Land Vehicle in a Neural Network** (1988)  
  Dean A. Pomerleau  
  This seminal paper introduced one of the first demonstrations of end-to-end learning for autonomous driving, where a neural network learned to map sensor inputs directly to steering commands.  
  [Paper](https://proceedings.neurips.cc/paper_files/paper/1988/file/812b4ba287f5ee0bc9d43bbf5bbe87fb-Paper.pdf)

- **Reinforcement Learning: An Introduction** (2018)  
  Richard S. Sutton and Andrew G. Barto  
  The foundational textbook on reinforcement learning, covering the principles that underlie how robots learn through trial and error.  
  [Book Website](http://incompleteideas.net/book/the-book-2nd.html)


<EditOnGithub source="https://github.com/huggingface/robotics-course/blob/main/units/en/unit1/1.mdx" />

### LeRobot: An End-to-End Robot Learning Library
https://huggingface.co/learn/robotics-course/unit1/2.md

# LeRobot: An End-to-End Robot Learning Library

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch1/ch1-lerobot-figure1.png" alt="LeRobot Library Overview" style="width: 100%;" />

Now that we've learned some history, let's explore the main Python library we'll be using throughout this course: LeRobot.

LeRobot is an open-source library for robotics developed by Hugging Face. Think of it as a complete toolkit that handles everything from controlling real robots to training advanced learning algorithms, all in one place using PyTorch.

What makes LeRobot special is that it's "vertically integrated." This means it provides a unified way to work with real robots, handle complex multi-modal data (like combining camera feeds with sensor readings), and integrates smoothly with the PyTorch and Hugging Face tools you might already know. Essentially, LeRobot aims to be your one-stop library for robot learning projects.

> [!TIP]
> **Supported Robots:** LeRobot currently supports accessible platforms such as **SO-100/SO-101** (3D‑printable arms) and **ALOHA/ALOHA‑2** (bimanual manipulation). For the up‑to‑date list of supported platforms, see the [official documentation](https://huggingface.co/docs/lerobot).

One key advantage is that LeRobot uses a standardized approach for connecting to different robot platforms. This means adding support for new robots requires much less work than starting from scratch. The library also introduces `LeRobotDataset`, a specialized format for robotics data that the open-source community is already using to share datasets efficiently.

LeRobot includes implementations of many cutting-edge robot learning algorithms, all built with PyTorch for efficiency. It also provides tools for running experiments and tracking results. Perhaps most importantly for real-world applications, LeRobot separates the "thinking" part (planning what to do) from the "doing" part (executing actions). This separation is crucial because it allows robots to react quickly and adapt better when things don't go exactly as planned.

> [!WARNING]
> **Performance Note:** LeRobot's optimized inference stack is crucial for real-time robot control, where delays of even milliseconds can affect performance. This separation of planning and execution is a key innovation.

## References

For a full list of references, check out the [tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial).

- **RT-1: Robotics Transformer for Real-World Control at Scale** (2023)  
  Anthony Brohan et al.  
  This paper demonstrates how transformer architectures can be applied to robotic control at scale, showing the power of learning from large and diverse datasets.  
  [arXiv:2212.06817](https://huggingface.co/papers/2212.06817)

- **Open X-Embodiment: Robotic Learning Datasets and RT-X Models** (2023)  
  Open X-Embodiment Collaboration  
  A collaborative effort to create large-scale, diverse robotic datasets across multiple embodiments, demonstrating the importance of data sharing in advancing robot learning.  
  [arXiv:2310.08864](https://huggingface.co/papers/2310.08864)



<EditOnGithub source="https://github.com/huggingface/robotics-course/blob/main/units/en/unit1/2.mdx" />

### Code Example: Datasets, in practice
https://huggingface.co/learn/robotics-course/unit1/4.md

# Code Example: Datasets, in practice

This section shows you how to work with robotics datasets from Hugging Face using the LeRobotDataset class. We'll start with simple examples and gradually add complexity, so you can copy and adapt the approach that best fits your project.

The key thing to understand is that any dataset on the Hub that follows LeRobot's format (with tabular data, visual data, and metadata included) can be loaded with just one line of code.

When working with robotics data, you often need to look at multiple time steps at once rather than single data points. Why? Most robot learning algorithms need to see how things change over time. For example, to pick up an object, a robot might need to see what happened in the last few moments to understand the current situation better. Similarly, many algorithms work better when they can plan several actions ahead rather than just deciding what to do right now.

LeRobotDataset makes this easy with "temporal windowing." You simply declare which time offsets you want (e.g., the current frame plus the two previous ones), and it automatically handles the complexity of retrieving those frames, even when some are missing at the beginning or end of an episode.

![streaming-multiple-frames](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobotdataset-v3/streaming-multiple-frames.png)

> [!TIP]
> **Temporal Windows Explained:** 
> - **Observation history**: `[-0.2, -0.1, 0.0]` gives you 200ms, 100ms, and current observations
> - **Action sequences**: `[0.0, 0.1, 0.2]` provides current and next 2 actions (100ms apart)
> - **Automatic padding**: Missing frames at episode boundaries are handled automatically. The dataset always returns the requested number of frames, applying padding where necessary.
> - **Mask included**: Know which frames are real vs. padded for proper training
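The padding-and-mask behavior above can be sketched with frame indices. This is a toy model only; the real `delta_timestamps` API works with time offsets in seconds and matches frames by timestamp:

```python
def window(frames, index, offsets):
    """Gather frames at index + offset; out-of-range offsets are padded by
    repeating the nearest real frame, with a mask marking padded entries."""
    values, is_pad = [], []
    for off in offsets:
        j = index + off
        pad = j < 0 or j >= len(frames)
        j = min(max(j, 0), len(frames) - 1)  # clamp to the episode boundary
        values.append(frames[j])
        is_pad.append(pad)
    return values, is_pad

episode = ["f0", "f1", "f2", "f3"]
# Asking for two frames of history at the very start of the episode:
print(window(episode, 0, [-2, -1, 0]))  # (['f0', 'f0', 'f0'], [True, True, False])
```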

Conveniently, when you use LeRobotDataset with a PyTorch `DataLoader`, the individual sample dictionaries are automatically collated into a single dictionary of batched tensors for downstream training or inference. LeRobotDataset also natively supports streaming: with a one-line change, users can stream a large dataset hosted on the Hugging Face Hub instead of downloading it. Streaming supports high-performance batch processing (ca. 80-100 it/s, depending on connectivity) and a high degree of frame randomization, key features for practical behavioral cloning (BC) algorithms, which may otherwise be slow or operate on highly non-i.i.d. data. This feature improves accessibility: users can process large datasets without large amounts of memory and storage.
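What that collation step does can be sketched in plain Python. This is a simplified stand-in for PyTorch's default `collate_fn`, using nested lists where the real one stacks tensors:

```python
def collate(samples):
    """Stack a list of per-sample dicts into one dict of batch-first lists,
    mirroring what the default PyTorch collate_fn does with tensors."""
    return {key: [s[key] for s in samples] for key in samples[0]}

samples = [
    {"observation.state": [0.0] * 6, "action": [1.0] * 6},
    {"observation.state": [0.1] * 6, "action": [0.9] * 6},
]
batch = collate(samples)
print(len(batch["action"]), len(batch["action"][0]))  # 2 6
```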

Here are different ways to set up temporal windows depending on your use case. Skim the options and pick one to start—switching later is just a change to the dictionary.

<hfoptions id="temporal-windows">
<hfoption id="basic-bc">

**Basic Behavioral Cloning** (learn current action from current observation):

```python
# Simple: current observation → current action
delta_timestamps = {
    "observation.images.wrist_camera": [0.0],  # Just current frame
    "action": [0.0]  # Just current action
}

dataset = LeRobotDataset(
    "lerobot/svla_so101_pickplace", 
    delta_timestamps=delta_timestamps
)
```

</hfoption>
<hfoption id="history-bc">

**History-Based BC** (use observation history for better decisions):

```python
# Use observation history for context
delta_timestamps = {
    "observation.images.wrist_camera": [-0.2, -0.1, 0.0],  # 200ms history
    "action": [0.0]  # Current action
}

dataset = LeRobotDataset(
    "lerobot/svla_so101_pickplace",
    delta_timestamps=delta_timestamps
)

sample = dataset[100]
# Images shape: [3, C, H, W] - 3 historical frames
# Action shape: [action_dim] - single current action
```

</hfoption>
<hfoption id="action-chunking">

**Action Chunking** (predict action sequences for smoother control):

```python
# Predict multiple future actions at once
delta_timestamps = {
    "observation.images.wrist_camera": [-0.1, 0.0],  # Recent + current
    "action": [0.0, 0.1, 0.2, 0.3]  # Current + 3 future actions
}

dataset = LeRobotDataset(
    "lerobot/svla_so101_pickplace",
    delta_timestamps=delta_timestamps
)

sample = dataset[100] 
# Images shape: [2, C, H, W] - 2 observation frames
# Action shape: [4, action_dim] - 4 action predictions
```

</hfoption>
</hfoptions>

### Streaming Large Datasets

> [!TIP]
> **When to use streaming:**
> - **Dataset > available storage** - Stream datasets that don't fit on your disk
> - **Experimentation** - Quickly try different datasets without downloading
> - **Cloud training** - Reduce startup time by streaming from Hugging Face Hub
> - **Network available** - Requires stable internet connection during training
>
> **Performance:** Streaming achieves 80-100 it/s with good connectivity! That is (on average) comparable with locally-stored datasets, factoring out initialization overhead.

<hfoptions id="dataset-loading">
<hfoption id="download">

**Download Dataset** (faster training, requires storage):

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Downloads dataset to local cache
dataset = LeRobotDataset("lerobot/svla_so101_pickplace")

# Fastest access after download
sample = dataset[100]
```

</hfoption>
<hfoption id="streaming">

**Stream Dataset** (no storage needed, requires internet):

```python
from lerobot.datasets.streaming_dataset import StreamingLeRobotDataset

# Stream data without downloading
streaming_dataset = StreamingLeRobotDataset(
    "lerobot/svla_so101_pickplace",
    delta_timestamps=delta_timestamps
)

# Works exactly like regular dataset
sample = streaming_dataset[100]
```

</hfoption>
</hfoptions>

## Training Integration

You can easily integrate regular and streaming datasets with PyTorch data loaders, which makes plugging any LeRobotDataset into your own `torch` training loop convenient. Because all frames are fetched from the dataset as tensors, wrapping dataset iteration in a training loop is particularly straightforward.

### PyTorch DataLoader
```python
import torch
from torch.utils.data import DataLoader
# Create DataLoader for training
dataloader = DataLoader(
    dataset,
    batch_size=16,
    shuffle=True,
    num_workers=4
)

# Training loop
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

for batch in dataloader:
    # Move to device
    observations = batch["observation.state"].to(device)
    actions = batch["action"].to(device)
    images = batch["observation.images.wrist_camera"].to(device)
    
    # Your model training here
    # loss = model(observations, images, actions)
    # loss.backward()
    # optimizer.step()
```
## Why This Matters

This simple API hides significant complexity:
- ✅ **Multi-modal synchronization** - Images and sensors perfectly aligned
- ✅ **Efficient storage** - Compressed videos, memory-mapped arrays
- ✅ **Temporal handling** - Easy access to observation/action sequences  
- ✅ **Scalability** - Same code works for small and massive datasets

Compare this to traditional robotics data handling, which often requires:
- Custom parsers for each data format
- Manual synchronization across modalities
- Complex buffering for temporal windows
- Platform-specific loading code

LeRobotDataset **standardizes and simplifies** all of this!

<!-- TODO: Small table comparing "Traditional" vs "LeRobotDataset" (parsers, sync, buffering, platform code). -->


## Section Quiz

Test your understanding of LeRobot and its role in robot learning:

### 1. What makes LeRobot different from traditional robotics libraries?

<Question
	choices={[
		{
			text: "It only works with simulation environments.",
			explain: "LeRobot actually focuses on real-world robots and supports many physical platforms."
		},
		{
			text: "It provides end-to-end integration across the entire robotics stack with state-of-the-art learning algorithms.",
			explain: "LeRobot's key innovation is combining hardware control, data handling, and learning algorithms in one unified library.",
            correct: true
		},
		{
			text: "It requires expensive industrial robots to function.",
			explain: "LeRobot focuses on accessible, low-cost robots to democratize robotics."
		},
        {
			text: "It only supports classical control methods.",
			explain: "LeRobot specifically focuses on learning-based approaches, not classical control."
		}
	]}
/>

### 2. Which of the following is NOT a key component of LeRobot's approach?

<Question
	choices={[
		{
			text: "Unified low-level robot configuration handling",
			explain: "This is indeed a key component that enables cross-platform compatibility."
		},
		{
			text: "Native robotics dataset format (LeRobotDataset)",
			explain: "LeRobotDataset is a central innovation of the library."
		},
		{
			text: "Requiring expert knowledge for each new robot platform",
			explain: "This is actually what LeRobot aims to eliminate - it reduces the expertise barrier.",
            correct: true
		},
        {
			text: "State-of-the-art learning algorithms with PyTorch implementations",
			explain: "SOTA algorithms are a core feature of LeRobot."
		}
	]}
/>

### 3. What is the main advantage of LeRobot's optimized inference stack?

<Question
	choices={[
		{
			text: "It makes training faster on GPUs.",
			explain: "The inference stack is about deployment, not training speed."
		},
		{
			text: "It reduces the memory requirements for storing datasets.",
			explain: "Memory reduction is handled by the dataset format, not the inference stack."
		},
		{
			text: "It decouples action planning from action execution for better real-time performance.",
			explain: "This separation is crucial for real-time robot control where millisecond delays matter.",
            correct: true
		},
        {
			text: "It automatically generates training data from robot interactions.",
			explain: "Data generation is not handled by the inference stack."
		}
	]}
/>

### 4. Which types of robotic platforms does LeRobot support?

<Question
	choices={[
		{
			text: "Only manipulation robots like robotic arms.",
			explain: "LeRobot supports much more than just manipulation platforms."
		},
		{
			text: "Manipulation, locomotion, and whole-body control platforms.",
			explain: "LeRobot supports the full spectrum of robotic platforms, from simple arms to complex humanoids.",
            correct: true
		},
		{
			text: "Only robots that cost more than $10,000.",
			explain: "LeRobot focuses on accessible, low-cost platforms to democratize robotics."
		},
        {
			text: "Only robots manufactured by specific companies.",
			explain: "LeRobot supports open-source and accessible robots from various sources."
		}
	]}
/>

### 5. What does "end-to-end integration with the robotics stack" mean in the context of LeRobot?

<Question
	choices={[
		{
			text: "It only handles high-level planning, not low-level control.",
			explain: "End-to-end means it covers everything from low-level control to high-level algorithms."
		},
		{
			text: "It covers everything from low-level hardware control to high-level learning algorithms.",
			explain: "This comprehensive coverage eliminates the need to integrate multiple separate tools.",
            correct: true
		},
		{
			text: "It requires separate tools for data handling and model training.",
			explain: "End-to-end integration means you don't need separate tools - everything is unified."
		},
        {
			text: "It only works with specific operating systems.",
			explain: "Platform integration refers to robotics components, not operating systems."
		}
	]}
/>

### 6. What is the primary purpose of the `delta_timestamps` parameter in LeRobotDataset?

<Question
	choices={[
		{
			text: "It sets the frame rate for video recording.",
			explain: "Frame rates are stored in metadata, not controlled by delta_timestamps."
		},
		{
			text: "It defines temporal windows to access observation histories and action sequences.",
			explain: "delta_timestamps allows you to specify which time offsets to include, enabling access to past observations and future actions.",
            correct: true
		},
		{
			text: "It synchronizes data across different robots.",
			explain: "Synchronization across robots is not handled by delta_timestamps."
		},
        {
			text: "It compresses video data for storage efficiency.",
			explain: "Video compression is handled separately in the dataset storage format."
		}
	]}
/>

### 7. Which of the following best describes the three main components of LeRobotDataset?

<Question
	choices={[
		{
			text: "Images, Actions, and Rewards",
			explain: "While these are important data types, they don't describe the architectural components."
		},
		{
			text: "Tabular Data, Visual Data, and Metadata",
			explain: "These are the three architectural pillars: efficient storage for sensor data, compressed videos, and JSON metadata files.",
            correct: true
		},
		{
			text: "Training, Validation, and Test sets",
			explain: "These are data splits, not the architectural components of the format."
		},
        {
			text: "Simulation, Real Robot, and Hybrid data",
			explain: "These describe data sources, not the storage architecture."
		}
	]}
/>

### 8. What happens when you use `StreamingLeRobotDataset` instead of `LeRobotDataset`?

<Question
	choices={[
		{
			text: "The data is automatically augmented for better training.",
			explain: "Streaming doesn't involve data augmentation - that's a separate preprocessing step."
		},
		{
			text: "The dataset is downloaded faster to your local machine.",
			explain: "Streaming actually avoids downloading the dataset entirely."
		},
		{
			text: "Data is streamed from the Hugging Face Hub without downloading, saving storage space.",
			explain: "StreamingLeRobotDataset allows you to process large datasets without downloading them locally.",
            correct: true
		},
        {
			text: "The dataset is automatically split into train/validation sets.",
			explain: "Data splitting is independent of the streaming vs download choice."
		}
	]}
/>

### 9. In the context of robot learning, what does "temporal windowing" refer to?

<Question
	choices={[
		{
			text: "The time it takes to train a robot learning model.",
			explain: "Training time is not what temporal windowing refers to."
		},
		{
			text: "Accessing multiple time steps of observations and actions around a given frame.",
			explain: "Temporal windowing allows algorithms to use observation history and action sequences, crucial for robot learning.",
            correct: true
		},
		{
			text: "The frequency at which robot sensors collect data.",
			explain: "Sensor frequency is separate from temporal windowing in datasets."
		},
        {
			text: "The duration of each robot episode or trajectory.",
			explain: "Episode duration is different from temporal windowing within episodes."
		}
	]}
/>

### 10. What is the main advantage of LeRobotDataset's approach to storing video data?

<Question
	choices={[
		{
			text: "Videos are stored in the highest possible quality.",
			explain: "Quality isn't the main focus - efficiency and scalability are."
		},
		{
			text: "Each frame is stored as a separate file for easy access.",
			explain: "This would actually be inefficient - LeRobotDataset does the opposite."
		},
		{
			text: "Multiple episodes are concatenated into larger MP4 files to reduce file system stress.",
			explain: "This approach dramatically reduces the number of files, making storage more efficient for large datasets.",
            correct: true
		},
        {
			text: "Videos are automatically compressed using AI algorithms.",
			explain: "Standard video compression is used, not AI-based compression."
		}
	]}
/>

### 11. Which statement about LeRobotDataset's compatibility is correct?

<Question
	choices={[
		{
			text: "It only works with specific robot brands like SO-100.",
			explain: "LeRobotDataset is designed to work across many different robot platforms."
		},
		{
			text: "It requires custom code for each new robot platform.",
			explain: "The unified format reduces the need for custom code per platform."
		},
		{
			text: "It integrates seamlessly with PyTorch DataLoader and Hugging Face ecosystems.",
			explain: "This integration makes it easy to use robotics data with existing ML workflows.",
            correct: true
		},
		{
			text: "It only supports simulation data, not real robot data.",
			explain: "LeRobotDataset supports both simulation and real robot data."
		}
	]}
/>

## References

For a full list of references, check out the [tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial).

- **Diffusion Policy: Visuomotor Policy Learning via Action Diffusion** (2024)  
  Cheng Chi et al.  
  This paper introduces diffusion models for robot policy learning and discusses how temporal windowing and action chunking enable smooth visuomotor control.  
  [arXiv:2303.04137](https://huggingface.co/papers/2303.04137)

- **RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control** (2023)  
  Anthony Brohan et al.  
  Demonstrates how vision-language models can be fine-tuned for robotic control, including discussion of temporal context windows and action prediction horizons.  
  [arXiv:2307.15818](https://huggingface.co/papers/2307.15818)



<EditOnGithub source="https://github.com/huggingface/robotics-course/blob/main/units/en/unit1/4.mdx" />

### Understanding Robot Kinematics
https://huggingface.co/learn/robotics-course/unit2/3.md

# Understanding Robot Kinematics

In this section, we'll build intuition for robot kinematics through a concrete, worked example. Kinematics describes the mathematical relationship between joint angles and end-effector positions - given the joint configuration, where does the robot's hand end up? We'll explore this fundamental concept by walking through a simplified but representative case.

Let's examine how traditional robotics approaches robot control using a specific example you can follow step by step.

## From Complex to Simple: The SO-100 Example

We'll start with a familiar robot platform and systematically simplify it to isolate the core kinematic principles without unnecessary complexity.

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-so100-to-planar-manipulator.png" alt="SO-100 to Planar Manipulator" style="width: 70%;" />

The SO-100 arm simplified to a 2D planar manipulator by constraining some joints.

The SO-100 is a 6-degree-of-freedom (6-DOF) robot arm. To understand the principles, let's simplify it to a **2-DOF planar manipulator** by constraining some joints.

## The Simplified Robot

<!-- TODO: Reimplement this in vanilla python -->

To keep the math clear, our simplified robot has:
- **Two joints** with angles θ₁ and θ₂  
- **Two links** of equal length *l*
- **Configuration** q = [θ₁, θ₂] ∈ [-π, +π]²

## Forward Kinematics: From Joints to Position

**Question:** Given joint angles θ₁ and θ₂, where is the end-effector?

**Answer:** We can calculate the end-effector position mathematically:

$$p(q) = \begin{pmatrix} l \cos(\theta_1) + l \cos(\theta_1 + \theta_2) \\ l \sin(\theta_1) + l \sin(\theta_1 + \theta_2) \end{pmatrix}$$

This is called **Forward Kinematics (FK)** - mapping from joint space to task space.

<!-- TODO: Small diagram: two-link arm in the plane with angles θ₁, θ₂ and link length l, showing how the two vector contributions add head-to-tail to reach p(q). -->

> [!TIP]
> **Understanding the Math:** This equation comes from basic trigonometry! 
> - First link: endpoint at $(l \cos(\theta_1), l \sin(\theta_1))$
> - Second link: starts from first link's end, rotated by $\theta_1 + \theta_2$
> - Final position: sum of both link contributions
>
> **Why it matters:** For a simple robot, FK is relatively easy: given joint angles, we can always compute where the robot's hand is. More complex robots (for instance, dexterous hands) are more challenging to model.
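
The trigonometry above maps directly to code. Below is a minimal sketch of the FK equation for our two-link arm (function and variable names are illustrative, not part of any library):

```python
import numpy as np

def forward_kinematics(q, l=1.0):
    """End-effector position p(q) for a 2-link planar arm with equal link lengths l."""
    theta1, theta2 = q
    return np.array([
        l * np.cos(theta1) + l * np.cos(theta1 + theta2),  # x: first link + second link
        l * np.sin(theta1) + l * np.sin(theta1 + theta2),  # y
    ])

# Both joints at zero: the arm is fully stretched along the x-axis.
print(forward_kinematics([0.0, 0.0]))  # -> [2. 0.]
```

Note that FK is a plain function evaluation: no search and no ambiguity, since each configuration q gives exactly one position p.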

## Inverse Kinematics: From Position to Joints

Now let’s try to invert the mapping: given a desired hand position, what joint configuration achieves it?

**Question:** Given a desired end-effector position p*, what joint angles should we use?

**Answer:** This is **Inverse Kinematics (IK)**, which is typically much harder: in general there is no closed-form solution, and numerical approaches rely on the Jacobian matrix of the forward kinematics function!

We need to solve: $p(q) = p^*$

In general, this becomes an optimization problem:
$$\min_{q \in \mathcal{Q}} \|p(q) - p^*\|_2^2$$

<!-- TODO: Workspace sketch: a reachable annulus for a 2‑link arm (inner radius |l₁−l₂|, outer radius l₁+l₂), with an in‑workspace and out‑of‑workspace target p*. -->

> [!WARNING]
> **Why IK is Hard:**
> - **Multiple solutions** - Same end position can be reached with different joint angles
>
> - **Nonlinear equations** - The trigonometric equations rarely admit clean analytical solutions
> - **Constraints** - Joint limits and obstacles further complicate the problem
>
> This is why robotics engineers spend so much time on IK algorithms!
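
One common way to attack the optimization problem above is to iterate small Jacobian-based corrections (a damped Gauss-Newton scheme). The sketch below is illustrative, with a hand-picked step size and initial guess, and redefines FK so it is self-contained:

```python
import numpy as np

def fk(q, l=1.0):
    """Forward kinematics of the 2-link planar arm."""
    t1, t2 = q
    return np.array([l * np.cos(t1) + l * np.cos(t1 + t2),
                     l * np.sin(t1) + l * np.sin(t1 + t2)])

def jac(q, l=1.0):
    """Jacobian dp/dq of the forward kinematics."""
    t1, t2 = q
    return np.array([[-l * np.sin(t1) - l * np.sin(t1 + t2), -l * np.sin(t1 + t2)],
                     [ l * np.cos(t1) + l * np.cos(t1 + t2),  l * np.cos(t1 + t2)]])

def solve_ik(p_star, q0=(0.1, 1.0), iters=200, step=0.5):
    """Minimize ||p(q) - p*||^2 by iterating q <- q - step * J(q)^+ (p(q) - p*)."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        q -= step * np.linalg.pinv(jac(q)) @ (fk(q) - np.asarray(p_star))
    return q

q_sol = solve_ik([1.0, 1.0])
print(np.round(fk(q_sol), 3))  # the solver reaches the target (up to tolerance)
```

Note the caveats from the warning above: a different initial guess `q0` can land on a different (equally valid) solution, and an unreachable target leaves a nonzero residual error.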

## The Challenge of Constraints

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-planar-manipulator-free.png" alt="Free Motion" style="width: 100%; max-width: 200px;" />

*Free to move*

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-planar-manipulator-floor.png" alt="Floor Constraint" style="width: 100%; max-width: 200px;" />

*Constrained by floor*

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-planar-manipulator-floor-shelf.png" alt="Multiple Constraints" style="width: 100%; max-width: 200px;" />

*Multiple obstacles*

Real robots face **constraints**:
- **Physical limits** - Can't move through the floor
- **Obstacles** - Must avoid collisions  
- **Joint limits** - Finite range of motion

These constraints make the feasible configuration space $\mathcal{Q}$ much more complex!
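
The most basic constraint is the workspace itself: with link lengths l₁ and l₂, the reachable set of a two-link arm is an annulus with inner radius |l₁ − l₂| and outer radius l₁ + l₂. A quick membership check (helper name is illustrative):

```python
import numpy as np

def is_reachable(p_star, l1=1.0, l2=1.0):
    """True iff p* lies in the reachable annulus (ignoring joint limits and obstacles)."""
    r = np.linalg.norm(p_star)
    return abs(l1 - l2) <= r <= l1 + l2

print(is_reachable([1.0, 1.0]))  # True: within the outer radius l1 + l2 = 2
print(is_reachable([3.0, 0.0]))  # False: outside the workspace
```

Joint limits and obstacles then carve additional regions out of this set, which is why the feasible $\mathcal{Q}$ quickly loses its simple shape.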

## Key Insight

Even for this **simple 2-DOF robot**, solving IK with constraints is non-trivial. Real robots have:
- **6+ degrees of freedom**
- **Complex geometries**  
- **Dynamic environments**
- **Uncertain models**

Traditional approaches require **extensive mathematical modeling** and **expert knowledge** for each specific case.

> [!TIP]
> Mental model: FK is a direct calculator (q → p) and is usually easy; IK is a search (p → q) and becomes hard as soon as you add workspace limits, obstacles, or joint constraints. When IK gets brittle, we'll switch to differential reasoning (velocities) in the next step.

## References

For a full list of references, check out the [tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial).

- **Modern Robotics: Mechanics, Planning, and Control** (2017)  
  Kevin M. Lynch and Frank C. Park  
  Chapter 4 provides an in-depth treatment of forward kinematics with extensive examples, while Chapter 6 covers inverse kinematics with both analytical and numerical approaches.  
  [Book Website](http://hades.mech.northwestern.edu/index.php/Modern_Robotics)

- **Introduction to Robotics: Mechanics and Control** (2005)  
  John J. Craig  
  A classic textbook with detailed coverage of robot kinematics, including the Denavit-Hartenberg notation and systematic approaches to solving forward and inverse kinematics problems.  
  [Publisher Link](https://www.pearson.com/en-us/subject-catalog/p/introduction-to-robotics-mechanics-and-control/P200000003519)


<EditOnGithub source="https://github.com/huggingface/robotics-course/blob/main/units/en/unit2/3.mdx" />

### Classical Robotics
https://huggingface.co/learn/robotics-course/unit2/1.md

# Classical Robotics

In this section, we'll build a foundation in classical robotics that will help you understand why learning-based methods are so powerful. 

We'll start by exploring how robots generate motion, look at common types of robot movement, and work through a concrete example before discussing the limitations that motivate modern approaches.

> [!TIP]
> ## Key Takeaway 
>
> Learning-based approaches to robotics address fundamental challenges that traditional methods struggle with. 
>
> Modern robotics needs methods that can work across different tasks and robot types, allowing one approach to work effectively in many situations rather than requiring custom solutions for each problem. We also need to reduce our dependency on human experts who manually design rules and models for every situation. Finally, the field needs approaches that can take advantage of the rapidly growing collection of robotics datasets, learning from the collective knowledge captured in these large-scale data collections.

## Different Approaches to Robot Motion

Let's start with the big picture: how do different approaches make robots move?

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-approaches.png" alt="A diagram showing different approaches to robot motion generation, organized into two main categories: explicit dynamics-based methods on the left (including classical control, model predictive control, and trajectory optimization) and implicit learning-based methods on the right (including reinforcement learning, imitation learning, and neural networks). The diagram illustrates the spectrum from model-based to data-driven approaches in robotics." style="width: 50%;" />

Different methods for generating robot motion can be grouped based on whether they use explicit mathematical models or learn patterns implicitly from data.

This is merely an overview of different methods to generate motion, and is clearly non-exhaustive. Still, it provides a good primer on the most common approaches. The most important grouping by far depends on whether the methods model robot-environment interactions explicitly (_dynamics-based_) or implicitly (_learning-based_).

Further, knowledge of mechanical, electrical, and software engineering, as well as rigid-body mechanics and control theory, has proven essential in robotics since the field first developed in the 1950s. More recently, Machine Learning (ML) has also proved effective in robotics, complementing these more traditional disciplines.

As a direct consequence of its multi-disciplinary nature (at the very least, combining hardware and software), robotics has developed a wide array of methods, all concerned with the main purpose of **producing artificial motion in the physical world**.

<!-- TODO: Small comparison table: Explicit vs Implicit vs Hybrid (inputs, knowledge, pros/cons). -->

In this section, our goal is to introduce where classical methods excel, where they struggle, and why **learning‑based approaches** are helpful.

> [!TIP]
> **Explicit vs Implicit Models:**
>
> **Implicit (learning-based) approaches** take a fundamentally different strategy by learning patterns directly from data rather than requiring explicit mathematical models. These methods require less domain-specific engineering and can adapt to complex, uncertain environments that would be difficult to model analytically. Neural networks and reinforcement learning algorithms are prime examples of this approach.
>
> **Explicit (dynamics-based) approaches** rely on hand-crafted mathematical models of physics and require deep domain expertise to be implemented effectively. These methods work exceptionally well for well-understood, controlled scenarios where the physics can be precisely modeled. Classic examples include PID controllers and Model Predictive Control systems that have been the backbone of industrial robotics for decades.  
>
> **Hybrid approaches** represent an exciting middle ground, combining the reliability of physics knowledge with the adaptability of learning systems. These methods use physics knowledge to guide and constrain the learning process, often achieving better performance than either approach alone.

## Different Types of Motion

Now that we have the big picture, we can situate the problem: what kinds of motion do robots typically perform?

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-platforms.png" alt="A collection of six different robotic platforms showing the diversity of robot designs: ViperX (a small desktop robotic arm), SO-100 (an open-source 3D-printable arm), Boston Dynamics' Spot (a four-legged walking robot), Open-Duck (a wheeled mobile robot), 1X's NEO (a humanoid robot), and Boston Dynamics' Atlas (an advanced bipedal humanoid robot). The image demonstrates how different robot designs are optimized for different types of motion and tasks." style="width: 70%;" />

Different kinds of motions are achieved with potentially very different robotic platforms. From left to right, top to bottom: ViperX, SO-100, Boston Dynamics' Spot, Open-Duck, 1X's NEO, Boston Dynamics' Atlas. This is an example list of robotic platforms and is (very) far from being exhaustive.

At a high level, most systems you’ll encounter fall into one of these three categories. Knowing which bucket you’re in helps you choose models, datasets, and controllers appropriately.

In the vast majority of instances, robotics deals with producing motion by actuating joints that connect (nearly) rigid links. A key distinction between focus areas in robotics is whether the generated motion modifies the absolute state of the environment through dexterous interactions, changes the relative state of the robot with respect to its environment through mobility, or combines both capabilities.

**Manipulation** involves generating motion to perform actions that induce desirable modifications in the environment. These effects are typically achieved *through* the robot - for example, a robotic arm grasping objects, assembling components, or using tools. The robot changes the world around it while remaining in a fixed location.

**Locomotion** encompasses motions that result in changes to the robot's physical location within its environment. This general category includes both *wheeled locomotion* (like mobile bases and autonomous vehicles) and *legged locomotion* (like walking robots and quadrupeds), depending on the mechanism the robot uses to move through its environment.

<!-- TODO: Diagram: three boxes (Manipulation, Locomotion, Mobile Manipulation) with 1–2 concrete examples each; arrows showing shared sensing (vision/touch) but different action spaces. -->

> [!TIP]
> Quick classifier: ask "what changes?" If mainly the world changes (object pose/state), you're in manipulation. If mainly the robot state changes, you're in locomotion. If both change meaningfully within the task, you're in mobile manipulation. This simple test helps when designing observations, actions, and evaluation.

We’ll reuse this taxonomy when discussing datasets (what sensors you need) and policies (what action spaces you predict) in the next sections.

## Example: Planar Manipulation

Let’s ground the ideas with a concrete, minimal example you can reason about step by step.

Robot manipulators typically consist of a series of links and joints, articulated in a chain that ends in an *end-effector*. Actuated joints generate motion of the links, while the end-effector performs specific actions at the target location (e.g., grasping/releasing objects by closing/opening a gripper, using a specialized tool like a screwdriver, etc.).

Recently, the development of low-cost manipulators like the ALOHA, ALOHA-2 and SO-100/SO-101 platforms significantly lowered the barrier to entry to robotics, considering the increased accessibility of these robots compared to more traditional platforms like the Franka Emika Panda arm.

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-cost-accessibility.png" alt="Robot Cost Comparison" style="width: 40%;" />

Cheaper, more accessible robots are starting to rival traditional platforms like the Panda arm in adoption in resource-constrained scenarios. The SO-100, in particular, costs a few hundred Euros and can be entirely 3D-printed in hours, while the industrially-manufactured Panda arm costs tens of thousands of Euros and is not openly available.

### Forward and Inverse Kinematics

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-so100-to-planar-manipulator.png" alt="SO-100 to Planar Manipulator" style="width: 70%;" />

The SO-100 arm simplified to a 2D planar manipulator by preventing some joints from moving.

Consider a simplified version of the SO-100 where we prevent some joints from moving. This reduces the complexity from 6 degrees of freedom to just 2 (plus the gripper). We can control two angles θ₁ and θ₂, which together define the robot's configuration: q = [θ₁, θ₂].

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-planar-manipulator-free.png" alt="Free Motion" style="width: 100%; max-width: 200px;" />

*Free to move*

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-planar-manipulator-floor.png" alt="Floor Constraint" style="width: 100%; max-width: 200px;" />

*Constrained by the surface*

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-planar-manipulator-floor-shelf.png" alt="Multiple Constraints" style="width: 100%; max-width: 200px;" />

*Constrained by surface and (fixed) obstacle*

Considering this example, we can analytically write the end-effector's position $p \in \mathbb{R}^2$ as a function of the robot's configuration, $p = p(q)$:

$$p(q) = \begin{pmatrix} l \cos(\theta_1) + l \cos(\theta_1 + \theta_2) \\ l \sin(\theta_1) + l \sin(\theta_1 + \theta_2) \end{pmatrix}$$

**Forward Kinematics (FK)** maps a robot configuration into the corresponding end-effector pose, whereas **Inverse Kinematics (IK)** is used to reconstruct the configuration(s) given an end-effector pose.

In the simplified case considered here, one can control the end-effector to reach a goal position $p^*$ by solving $p(q) = p^*$ analytically for $q$. However, in the general case, one might not be able to solve this problem analytically, and typically resorts to iterative optimization methods:

$$\min_{q \in \mathcal{Q}} \|p(q) - p^*\|_2^2$$

Exact analytical solutions to IK are even less appealing when one considers the presence of obstacles in the robot's workspace, resulting in constraints on the possible values of $q$.

> [!TIP]
> If the math feels dense, focus on the mapping: FK answers "where is the hand given the joints?", IK asks "what joints reach that hand position?". The rest of the unit shows why the IK direction becomes hard in realistic settings.
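
For this particular two-link geometry, the equation $p(q) = p^*$ can in fact be solved in closed form using the law of cosines. Below is a sketch for equal link lengths $l$, returning one of the two elbow solutions (function names are illustrative):

```python
import numpy as np

def fk(q, l=1.0):
    """Forward kinematics of the 2-link planar arm."""
    t1, t2 = q
    return np.array([l * np.cos(t1) + l * np.cos(t1 + t2),
                     l * np.sin(t1) + l * np.sin(t1 + t2)])

def analytic_ik(p_star, l=1.0):
    """Closed-form IK for equal link lengths; assumes p* lies in the workspace."""
    x, y = p_star
    # Law of cosines: ||p*||^2 = 2 l^2 + 2 l^2 cos(theta2)
    cos_t2 = (x**2 + y**2 - 2 * l**2) / (2 * l**2)
    theta2 = np.arccos(np.clip(cos_t2, -1.0, 1.0))  # the mirrored solution is -theta2
    theta1 = np.arctan2(y, x) - np.arctan2(l * np.sin(theta2), l + l * np.cos(theta2))
    return np.array([theta1, theta2])

q = analytic_ik([1.0, 1.0])
print(np.round(fk(q), 3))  # -> [1. 1.]
```

This makes the multiplicity of solutions concrete: $+\theta_2$ and $-\theta_2$ reach the same point with different elbow configurations, and joint limits or obstacles may rule one of them out.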

### Differential Inverse Kinematics

When IK is hard to solve directly, we can often make progress by working with small motions (velocities) instead of absolute positions.

Let $J(q)$ denote the Jacobian matrix of (partial) derivatives of the FK-function. Then, one can apply the chain rule to any $p(q)$, deriving $\dot{p} = J(q) \dot{q}$, and thus finally relating variations in the robot configurations to variations in pose.

Given a desired end-effector trajectory, differential IK finds $\dot{q}(t)$ solving for joints' *velocities* instead of *configurations*:

$$\dot{q}(t) = \arg\min_\nu \|J(q(t)) \nu - \dot{p}^*(t)\|_2^2$$

This often admits the closed-form solution $\dot{q} = J^+(q) \, \dot{p}^*$, where $J^+(q)$ denotes the Moore-Penrose pseudo-inverse of $J(q)$.
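
Numerically, the pseudo-inverse solution is a one-liner. A minimal sketch using the Jacobian of the two-link FK above (the configuration and velocity values are arbitrary, chosen for illustration):

```python
import numpy as np

def jacobian(q, l=1.0):
    """Jacobian dp/dq of the 2-link planar forward kinematics."""
    t1, t2 = q
    return np.array([[-l * np.sin(t1) - l * np.sin(t1 + t2), -l * np.sin(t1 + t2)],
                     [ l * np.cos(t1) + l * np.cos(t1 + t2),  l * np.cos(t1 + t2)]])

q = np.array([0.2, 0.8])                          # current configuration
p_dot_star = np.array([0.1, 0.0])                 # desired end-effector velocity
q_dot = np.linalg.pinv(jacobian(q)) @ p_dot_star  # closed-form differential IK
print(jacobian(q) @ q_dot)                        # recovers p_dot_star away from singularities
```

Away from singular configurations (where $J$ loses rank), mapping the resulting $\dot{q}$ back through $J(q)$ recovers the desired end-effector velocity.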

<!-- TODO: Micro-diagram: arrows from q -> J(q) -> p with small velocity vectors. -->

### Adding Feedback Loops

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-planar-manipulator-floor-box.png" alt="Moving Obstacle" style="width: 100%;" />



While very effective when a goal trajectory has been well specified, the performance of differential IK can degrade significantly in the presence of modeling/tracking errors, or in the presence of non-modeled dynamics in the environment.

To mitigate the effects of modeling errors, sensing noise, and other disturbances, classical pipelines augment differential IK with feedback control, looping back quantities of interest. In practice, following a trajectory with a closed feedback loop might consist in feeding back the error between the target and measured pose, $\Delta p = p^* - p(q)$, thereby modifying the control to $\dot{q} = J^+(q) (\dot{p}^* + k_p \Delta p)$, with $k_p$ the (proportional) gain.

More advanced control techniques, such as feedback linearization, PID control, the Linear Quadratic Regulator (LQR), or Model-Predictive Control (MPC), can be employed to stabilize tracking and reject moderate perturbations.
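
To see the proportional feedback term at work, the sketch below integrates $\dot{q} = J^+(q)(\dot{p}^* + k_p \Delta p)$ with forward Euler for a fixed target (so $\dot{p}^* = 0$); the gain, time step, and initial guess are illustrative choices, not tuned values:

```python
import numpy as np

def fk(q, l=1.0):
    """Forward kinematics of the 2-link planar arm."""
    t1, t2 = q
    return np.array([l * np.cos(t1) + l * np.cos(t1 + t2),
                     l * np.sin(t1) + l * np.sin(t1 + t2)])

def jacobian(q, l=1.0):
    """Jacobian dp/dq of the forward kinematics."""
    t1, t2 = q
    return np.array([[-l * np.sin(t1) - l * np.sin(t1 + t2), -l * np.sin(t1 + t2)],
                     [ l * np.cos(t1) + l * np.cos(t1 + t2),  l * np.cos(t1 + t2)]])

def track(p_star, q0=(0.3, 0.5), kp=2.0, dt=0.01, steps=500):
    """Closed-loop differential IK with a proportional term on the pose error."""
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        delta_p = np.asarray(p_star) - fk(q)                  # feedback: task-space error
        q_dot = np.linalg.pinv(jacobian(q)) @ (kp * delta_p)  # p_dot_star = 0 for a fixed target
        q += dt * q_dot                                       # Euler integration
    return q

q_final = track([1.0, 1.0])
print(np.linalg.norm(fk(q_final) - np.array([1.0, 1.0])))  # error decays toward zero
```

In a real pipeline, $p(q)$ would come from measurements rather than the model, which is precisely what makes the loop robust to moderate modeling errors.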

<!-- TODO: Block diagram: desired trajectory → controller → robot → sensors → error → controller. -->

## Limitations of Dynamics-based Robotics

This brings us to the “so what?”: where do these classical tools struggle in practice, and why does that motivate learning?

Despite 60+ years of robotics research, autonomous robots are still largely incapable of performing tasks at human-level performance in the physical world while generalizing across (1) robot embodiments (different manipulators, different locomotion platforms, etc.) and (2) tasks (tying shoelaces, manipulating a diverse set of objects).

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-classical-limitations.png" alt="Classical Limitations" style="width: 90%;" />

Dynamics-based approaches to robotics suffer from several limitations: (1) orchestrating multiple components poses integration challenges; (2) the need to develop custom processing pipelines for the sensing modalities and tasks considered hinders scalability; (3) simplified analytical models of physical phenomena limit real-world performance. Lastly, (4) dynamics-based methods overlook trends in the availability and growth of robotics data.


### Key Limitations

**1. Integration Challenges**
Dynamics-based robotics pipelines have historically been **developed sequentially, engineering the different blocks** now found within most architectures for specific purposes. That is, sensing, state estimation, mapping, planning, (diff-)IK, and low-level control have traditionally been developed as distinct modules with fixed interfaces. Pipelining these modules proved error-prone: brittleness and compounding errors emerge whenever changes occur.

**2. Limited Scalability** 
Classical planners operate on compact, assumed-sufficient state representations; extending them to reason directly over raw, heterogeneous and noisy data streams is non-trivial. This results in a **limited scalability to multimodal data and multitask settings**, as incorporating high-dimensional perceptual inputs (RGB, depth, tactile, audio) traditionally required extensive engineering efforts to extract meaningful features for control.

**3. Modeling Limitations**
Setting aside integration and scalability challenges, accurately modeling contact, friction, and compliance for complicated systems remains difficult. Rigid-body approximations are often insufficient in the presence of deformable objects, and **relying on approximate models hinders the real-world applicability** of the methods developed.

**4. Overlooking Data Trends**
Lastly, dynamics-based methods (naturally) overlook the rather recent **increase in availability of openly-available robotics datasets**. The curation of academic datasets by large centralized groups of human experts in robotics is now increasingly complemented by a **growing number of robotics datasets contributed in a decentralized fashion** by individuals with varied expertise.

Taken together, these limitations motivate the exploration of learning-based approaches that can:
1. **Integrate perception and control more tightly**
2. **Adapt across tasks and embodiments** with reduced expert modeling interventions
3. **Scale gracefully in performance** as more robotics data becomes available

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robotics-course/classical-vs-robot-learning.png"  
     alt="Classical vs Robot Learning"  
     width="600" height="200">  

## References

For a full list of references, check out the [tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial).

- **Modern Robotics: Mechanics, Planning, and Control** (2017)  
  Kevin M. Lynch and Frank C. Park  
  A comprehensive textbook covering the foundations of classical robotics, including kinematics, dynamics, and control. Essential reading for understanding the traditional approaches discussed in this unit.  
  [Book Website](http://hades.mech.northwestern.edu/index.php/Modern_Robotics)

- **Springer Handbook of Robotics** (2016)  
  Edited by Bruno Siciliano and Oussama Khatib  
  An authoritative reference covering all aspects of robotics, from classical control theory to emerging learning-based approaches.  
  [DOI:10.1007/978-3-319-32552-1](https://doi.org/10.1007/978-3-319-32552-1)


<EditOnGithub source="https://github.com/huggingface/robotics-course/blob/main/units/en/unit2/1.mdx" />

### Types of Robot Motion
https://huggingface.co/learn/robotics-course/unit2/2.md

# Types of Robot Motion

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-platforms.png" alt="Robotics Platforms" style="width: 70%;" />

Different kinds of motions require very different robotic platforms. From left to right, top to bottom: ViperX, SO-100, Boston Dynamics' Spot, Open-Duck, 1X's NEO, Boston Dynamics' Atlas.

In this section, we'll organize the space of robot behaviors so you can quickly recognize what kind of problem you're solving and pick appropriate tools.

Most robotics involves creating motion by controlling joints that connect rigid links. The key distinction between different areas of robotics comes down to what the robot is trying to change: the world around it, its own position in the world, or both.

Most problems fall into one of three categories:

**Manipulation** involves the robot changing the environment around it while staying in a fixed location. The robot acts on the world - grasping objects, assembling parts, or using tools. Think of a factory robot arm that picks up parts and puts them together.

**Locomotion** involves the robot changing its position in the environment. This includes wheeled robots (like mobile bases and autonomous cars) and legged robots (like walking robots and quadrupeds) that move through their environment.

**Mobile Manipulation** combines both capabilities, creating systems that can both move through their environment and manipulate objects. These problems are more complex because they need to coordinate many more control variables than either locomotion or manipulation alone.

<!-- TODO: Diagram: three side-by-side boxes (Manipulation, Locomotion, Mobile Manipulation) with 1–2 examples each (e.g., pick-and-place; quadruped walking; mobile base with arm). Include typical observations (RGB, depth, proprioception) and action spaces (joint velocities vs base velocity). -->

> [!TIP]
> Quick rule of thumb: ask "what changes most?" If mainly the world (object pose/state) changes, it's manipulation. If mainly the robot pose changes, it's locomotion. If both change in a tightly coupled way, it's mobile manipulation. Use this to decide sensors to log and the action space to predict.

Recently, the development of low-cost manipulators like the ALOHA, ALOHA-2 and SO-100/SO-101 platforms significantly lowered the barrier to entry to robotics, considering the increased accessibility of these robots compared to more traditional platforms like the Franka Emika Panda arm.

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-cost-accessibility.png" alt="Robot Cost Comparison" style="width: 40%;" />

Cheaper, more accessible robots are starting to rival traditional platforms like the Panda arm in adoption in resource-constrained scenarios. The SO-100, in particular, costs a few hundred Euros and can be entirely 3D-printed in hours, while the industrially-manufactured Panda arm costs tens of thousands of Euros and is not openly available.

The traditional body of work developed since the very inception of robotics is increasingly complemented by learning-based approaches. ML has proven particularly transformative across the entire robotics stack: first by empowering planning-based techniques with improved state estimation, and more recently by replacing controllers end-to-end, effectively yielding perception-to-action methods.

While explicit models have proven fundamental in achieving important milestones in the development of modern robotics, recent works leveraging implicit models have proven particularly promising in overcoming scalability and applicability challenges via learning.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robotics-course/classical-vs-robot-learning.png"  
     alt="Classical vs Robot Learning"  
     width="600" height="200">  

We'll reuse this taxonomy in later units when we discuss datasets (modalities to record) and policies (what action chunks to predict) for each category.

## References

For a full list of references, check out the [tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial).

- **ALOHA 2: An Enhanced Low-Cost Hardware for Bimanual Teleoperation** (2024)  
  Jorge Aldaco et al.  
  This paper describes advances in accessible manipulation platforms, demonstrating how low-cost hardware enables research across manipulation tasks.  
  [Project Page](https://aloha-2.github.io/)

- **Learning Agile and Dynamic Motor Skills for Legged Robots** (2019)  
  Jemin Hwangbo et al.  
  A key paper demonstrating learning-based approaches to locomotion, showing how reinforcement learning can enable quadrupedal robots to perform complex dynamic movements.  
  [arXiv:1901.08652](https://huggingface.co/papers/1901.08652)


<EditOnGithub source="https://github.com/huggingface/robotics-course/blob/main/units/en/unit2/2.mdx" />

### From Classical to Learning-Based Robotics
https://huggingface.co/learn/robotics-course/unit2/5.md

# From Classical to Learning-Based Robotics

This chapter ties the classical tools you’ve learned to the motivations for learning‑based methods, then points you to what comes next. We’ll keep it concise and add signposts so you know where to focus.

> [!TIP]
> Don't worry if this page is a bit dense, we'll break it down in the next few units.

## What We've Learned So Far

First, a quick recap of the important concepts covered across the foundational units.

<!-- TODO: Small timeline diagram with milestones: Classical → Kinematics → Control → Limitations → Learning motivation. -->

**Motivation for Learning-Based Robotics:** You've explored the fundamental shift occurring in robotics today, moving from classical model-based approaches toward data-driven learning methods. We've established why robot learning is becoming essential for creating more capable and generalizable robotic systems, and how tools like LeRobot are making these advanced techniques accessible to a broader community of researchers and practitioners.

**LeRobot Ecosystem:** You've gained a fundamental understanding of LeRobot's approach to robotics. This includes understanding the vision behind LeRobot as an end-to-end robotics library, aiming at integrating the different aspects of robotics altogether. You have also learned about the LeRobotDataset format, which handles the complexity of multi-modal robotics data, and got practical experience with loading and processing real robotics datasets for machine learning applications.

Next, you will learn how to synthesize autonomous control behaviors directly from data, and deploy them on real-world robots using LeRobot.

**Classical Robotics Foundations:** We've examined the traditional approaches to robotics in detail, covering different types of robot motion including manipulation, locomotion, and mobile manipulation. You've learned about forward and inverse kinematics, differential kinematics, and feedback control systems. Most importantly, you've developed an understanding of why classical approaches, despite their mathematical rigor, struggle with the complexity and variability of real-world robotic applications.

## The Learning Revolution

Through our exploration of classical robotics, you've gained a clear understanding of why learning-based approaches represent such a significant advancement in the field.

**Classical approaches face fundamental limitations** that become apparent when dealing with real-world complexity. These methods require extensive mathematical modeling of every aspect of the robot's environment and interactions, which becomes prohibitively difficult for complex scenarios.

They struggle to integrate multi-modal data sources like vision, touch, and proprioception in a unified way. Perhaps most importantly, classical approaches don't scale well across different tasks or robot embodiments—each new application typically requires significant re-engineering and expert knowledge.

**Learning-based approaches offer compelling advantages** that directly address these limitations. Instead of requiring experts to model every aspect of the problem, these methods can learn appropriate behaviors and representations directly from data.

They naturally handle multi-modal, high-dimensional inputs through neural network architectures designed for such complexity, can generalize across different tasks and even different robot embodiments, and scale with the availability of data and computational resources.

<!-- TODO: graphic of pros and cons of classical and learning-based approaches -->

## What's Coming Next

The foundational knowledge you've gained prepares you for the advanced topics that follow. 

**Reinforcement Learning for Robotics** will explore how robots can learn optimal behaviors through trial and error interactions with their environment. You'll learn about designing appropriate reward signals for robotics tasks, understand reinforcement learning methods that enable robots to improve their performance over time, and tackle the crucial challenge of sample efficiency—learning effectively with limited real-world interaction data. We'll also cover how LeRobot implements these reinforcement learning algorithms in practice.

**Imitation Learning via Behavioral Cloning** will demonstrate how robots can acquire complex skills by observing and copying expert demonstrations. This approach is particularly valuable because it allows robots to learn from real-world human expertise without requiring explicit reward engineering, sidestepping the criticalities associated with using RL in practice. You'll understand how to handle the distribution shift problem that occurs when robots encounter situations not seen in training data, explore advanced imitation learning techniques beyond simple behavioral cloning, and gain practical experience implementing these methods using LeRobot's tools.

**Foundation Models for Robotics** will cover the cutting-edge developments that are creating more general and capable robotic systems. You'll explore how multi-task learning enables knowledge sharing across different robotic platforms and tasks, understand language-conditioned policies that allow robots to follow natural language instructions, and learn about scaling laws that govern how performance improves with larger models and datasets. This section will prepare you to understand and contribute to the development of truly generalist robotic systems.

## Practical Skills Gained

Through this foundational section of the course, you've developed both technical capabilities and conceptual understanding that will serve as the foundation for more advanced topics.

**Technical Skills:** You now understand how robotics data differs from traditional machine learning datasets and why specialized formats are necessary. You've gained practical experience working with the LeRobotDataset API, including loading and processing multi-modal robotics data that combines vision, proprioception, and action information. You've also learned about streaming large datasets efficiently, which is crucial for working with the massive datasets that power modern robot learning systems.

**Conceptual Understanding:** Perhaps most importantly, you've developed a clear mental model of the evolution occurring in robotics today. You understand the historical context of classical approaches, their mathematical foundations, and their fundamental limitations when applied to complex, real-world scenarios. You've gained insight into how learning-based approaches address these limitations and why the availability of large-scale robotics data is transforming what's possible in the field.

## Ready for the Next Challenge?

> [!WARNING]
> **You're now ready for advanced robot learning!** The concepts you've learned about data handling, multi-modal processing, and the limitations of classical approaches will be essential as we dive into:
>
> - **Reinforcement Learning** - How robots learn optimal behaviors through trial and error
> - **Imitation Learning** - How robots learn by watching human demonstrations  
> - **Foundation Models** - How large-scale models are creating general-purpose robotic intelligence
>
> **Coming soon:** These advanced units will build directly on the foundations you've just mastered.


## Course Summary
- Robot learning represents a paradigm shift from model-based to data-driven approaches
- LeRobot democratizes access to state-of-the-art robot learning techniques
- Classical robotics provides important foundations but has fundamental scalability limitations
- Learning-based methods can generalize across tasks, robots, and environments
- The future lies in combining classical insights with learning capabilities
- Large-scale datasets and foundation models are transforming what's possible in robotics

## Community and Resources

As you continue your robot learning journey:

**Keep Learning:**
- [Explore LeRobot documentation](https://huggingface.co/docs/lerobot)
- [Try LeRobot examples](https://github.com/huggingface/lerobot)
- [Give a read to our in-detail tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial)
- [Join the community discussions](https://huggingface.co/lerobot)

**Get Involved:**
- Contribute datasets to the community
- Share your robot learning experiments
- Help improve LeRobot tools and documentation

> [!TIP]
> If you're choosing a first project, start with a small imitation learning task using LeRobotDataset (pick‑and‑place on SO‑100/SO‑101). You'll get end‑to‑end experience—data, model, evaluation—without needing reward design or simulators.

---

**Congratulations on completing the foundational units!** You're now ready to dive into the exciting world of learning-based robotics algorithms.

## References

For a full list of references, check out the [tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial).

- **End-to-End Training of Deep Visuomotor Policies** (2016)  
  Sergey Levine et al.  
  A landmark paper demonstrating how deep learning can be used for direct visuomotor control, bypassing traditional perception and planning modules. This represents a key step in the transition from classical to learning-based robotics.  
  [arXiv:1504.00702](https://huggingface.co/papers/1504.00702)

- **Learning Dexterous In-Hand Manipulation** (2018)  
  OpenAI et al.  
  This paper demonstrates how reinforcement learning with domain randomization can solve complex manipulation tasks that would be extremely difficult to program using classical methods, highlighting the advantages of learning-based approaches.  
  [arXiv:1808.00177](https://huggingface.co/papers/1808.00177)

## Final Chapter Quiz

Test your understanding of the Classical Robotics unit. Choose the best answer for each question.

### 1. Which set best categorizes types of robot motion discussed in this unit?

<Question
	choices={[
		{
			text: "Navigation, mapping, and planning",
			explain: "These are pipeline functions, not motion categories."
		},
		{
			text: "Manipulation, locomotion, and mobile manipulation",
			explain: "These are the three motion categories described in the unit and latex section.",
	            correct: true
		},
		{
			text: "Simulation, perception, and control",
			explain: "These refer to environments or pipeline components, not motion categories."
		},
	        {
			text: "End-effector, joints, and sensors",
			explain: "These are components, not motion categories."
		}
	]}
/>

### 2. What is the essential difference between forward kinematics (FK) and inverse kinematics (IK)?

<Question
	choices={[
		{
			text: "FK maps joint angles to end-effector pose, whereas IK maps a desired pose to joint angles",
			explain: "Matches the unit and latex definition of FK and IK.",
	            correct: true
		},
		{
			text: "FK is about velocities and IK is about positions",
			explain: "Velocities are handled in differential kinematics; FK/IK are mappings between configuration and task space."
		},
		{
			text: "FK is only for 2‑DoF robots",
			explain: "FK/IK apply to robots with arbitrary DoF."
		},
	        {
			text: "IK is always faster to compute than FK",
			explain: "IK is generally harder; FK is usually straightforward."
		}
	]}
/>

### 3. In differential kinematics, what does the Jacobian J(q) represent?

<Question
	choices={[
		{
			text: "A mapping from joint velocities to end‑effector velocities",
			explain: "ṗ = J(q) q̇ is the fundamental relation (latex Section 02).",
	            correct: true
		},
		{
			text: "A mapping from end‑effector positions to joint torques",
			explain: "That would involve dynamics, not the kinematic Jacobian here."
		},
		{
			text: "The feasible workspace of the robot",
			explain: "Workspace depends on link lengths and limits, not J(q) alone."
		},
	        {
			text: "A motion planner for obstacle avoidance",
			explain: "Planning is a separate module."
		}
	]}
/>

### 4. The closed‑form differential IK solution uses J(q)^+. What does the superscript '+' denote?

<Question
	choices={[
		{
			text: "The Moore–Penrose pseudo‑inverse",
			explain: "This is stated explicitly in the unit and latex.",
	            correct: true
		},
		{
			text: "The transpose of the Jacobian",
			explain: "J^T is not what '+' denotes."
		},
		{
			text: "The determinant of the Jacobian",
			explain: "Determinant is a scalar; '+' indicates pseudo‑inverse."
		},
	        {
			text: "The adjugate of the Jacobian",
			explain: "Not used here."
		}
	]}
/>

### 5. How do obstacles and joint limits affect IK in the planar manipulator example?

<Question
	choices={[
		{
			text: "They restrict the feasible set of configurations and make IK harder",
			explain: "Latex Section 02 shows constraints narrowing Q and complicating IK.",
	            correct: true
		},
		{
			text: "They have no effect on IK solutions",
			explain: "This contradicts the constraints discussion."
		},
		{
			text: "They only change the FK mapping",
			explain: "FK formula stays the same; feasibility changes."
		},
	        {
			text: "They guarantee a unique IK solution",
			explain: "Constraints can remove solutions or make multiple remain."
		}
	]}
/>

### 6. Which statement best characterizes classical robotics pipelines as presented in this unit?

<Question
	choices={[
		{
			text: "They are modular stacks with fixed interfaces (perception, state estimation, planning, control)",
			explain: "Matches the modular pipeline described in latex Section 02.",
	            correct: true
		},
		{
			text: "They are single end‑to‑end learned policies",
			explain: "That describes learning‑based approaches, not classical."
		},
		{
			text: "They do not require expert tuning",
			explain: "Expert tuning is a known limitation of classical pipelines."
		},
	        {
			text: "They inherently integrate raw high‑dimensional inputs",
			explain: "Integration of high‑dimensional inputs is a challenge for classical methods."
		}
	]}
/>

### 7. Which set lists the core limitations highlighted for dynamics‑based approaches?

<Question
	choices={[
		{
			text: "Integration challenges, limited scalability, modeling limitations, and overlooking data trends",
			explain: "This is the exact set emphasized in the unit and latex figure.",
	            correct: true
		},
		{
			text: "Cost, accuracy, speed, and energy consumption",
			explain: "Not the four core limitations discussed here."
		},
		{
			text: "Hardware reliability, software licensing, and safety standards",
			explain: "Important topics but not the four cited limitations."
		},
	        {
			text: "Planner optimality, sensor calibration, and actuator wear",
			explain: "These are narrower engineering issues, not the core set."
		}
	]}
/>

### 8. What key advantage of learning‑based approaches is emphasized as a contrast to classical methods?

<Question
	choices={[
		{
			text: "They can learn end‑to‑end from data and generalize across tasks/embodiments",
			explain: "Consistent with the latex and unit discussion on generalization and end‑to‑end training.",
	            correct: true
		},
		{
			text: "They eliminate the need for sensors",
			explain: "Sensors remain essential."
		},
		{
			text: "They require no compute resources",
			explain: "Compute and data scale are central to learning."
		},
	        {
			text: "They do not need datasets",
			explain: "Learning approaches rely on data."
		}
	]}
/>

### 9. Why does the availability of open robotics datasets matter in this unit’s context?

<Question
	choices={[
		{
			text: "It enables learning‑based methods to scale and transfer knowledge",
			explain: "Matches the latex discussion of data trends motivating learning.",
	            correct: true
		},
		{
			text: "It reduces manufacturing costs of robot arms",
			explain: "Manufacturing costs are separate from dataset availability."
		},
		{
			text: "It removes the need for control theory entirely",
			explain: "Control insights still matter, even with learning."
		},
	        {
			text: "It guarantees perfect generalization",
			explain: "No method guarantees perfect generalization."
		}
	]}
/>


<EditOnGithub source="https://github.com/huggingface/robotics-course/blob/main/units/en/unit2/5.mdx" />

### Control Systems and Their Limitations
https://huggingface.co/learn/robotics-course/unit2/4.md

# Control Systems and Their Limitations

In this section, we step from kinematics to control. We first show how to reason with velocities (differential inverse kinematics, diff-IK), then close the loop with feedback, and finally summarize where classical pipelines struggle in practice.

## Differential Kinematics: A Smarter Approach

Instead of solving for joint positions directly, we can work with **velocities**:

### The Key Insight
If we know the relationship between joint velocities and end-effector velocities, we can control motion more smoothly:

$$\dot{p} = J(q) \dot{q}$$

Where $J(q)$ is the **Jacobian matrix** - the relationship between joint and task space velocities.

<!-- TODO: Micro-diagram: small vector at q in configuration space mapped by J(q) to a small vector at p in task space. -->

### Differential IK Solution
Given a desired end-effector velocity $\dot{p}^*$, find joint velocities:

$$\dot{q} = J(q)^+ \dot{p}^*$$

Where $J(q)^+$ is the **pseudo-inverse** of the Jacobian.
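To make the relations $\dot{p} = J(q)\dot{q}$ and $\dot{q} = J(q)^+ \dot{p}^*$ concrete, here is a minimal sketch for a hypothetical two-link planar arm (the unit link lengths and all numbers are illustrative assumptions, not from the text). For a square, non-singular Jacobian the pseudo-inverse reduces to the ordinary inverse, which a 2×2 matrix lets us write in closed form:

```python
import math

L1, L2 = 1.0, 1.0  # link lengths (assumed)

def fk(q1, q2):
    """Forward kinematics: joint angles -> end-effector position (x, y)."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def jacobian(q1, q2):
    """Analytic 2x2 Jacobian of the two-link planar arm."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def diff_ik(q1, q2, vx, vy):
    """Joint velocities realizing a desired end-effector velocity.

    For a square, non-singular J the pseudo-inverse J^+ is just J^-1.
    """
    (a, b), (c, d) = jacobian(q1, q2)
    det = a * d - b * c
    if abs(det) < 1e-9:
        raise ValueError("near a singularity; J is not invertible")
    # 2x2 inverse applied to the desired task-space velocity
    dq1 = ( d * vx - b * vy) / det
    dq2 = (-c * vx + a * vy) / det
    return dq1, dq2
```

You can check the result by verifying that $J(q)\dot{q}$ reproduces the requested $\dot{p}^*$. Near singular configurations ($\det J \approx 0$) the inverse blows up, which is why practical implementations often use a damped or regularized pseudo-inverse.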

## Adding Feedback Control

Open-loop tracking is brittle under modeling errors and disturbances. We close the loop by feeding back the tracking error.

<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-planar-manipulator-floor-box.png" alt="Moving Obstacle" style="width: 100%;" />

*Dealing with moving obstacles requires feedback control.*

Real environments are **dynamic and uncertain**. We need feedback to handle:
- **Modeling errors** - Our equations aren't perfect
- **Disturbances** - Unexpected forces or obstacles  
- **Sensor noise** - Measurements have uncertainty

### Feedback Control Solution

Combine desired motion with error correction:

$$\dot{q} = J(q)^+ (\dot{p}^* + k_p \Delta p)$$

Where $\Delta p = p^* - p(q)$ is the position error.


> [!TIP]
> Start with small $k_p$ and increase gradually while monitoring oscillations. Use a watchdog (safety stop) and saturate commands to keep the system within safe limits.
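To see the feedback law $\dot{q} = J(q)^+ (\dot{p}^* + k_p \Delta p)$ in action, here is a small simulation sketch for a hypothetical two-link planar arm with unit link lengths (all values are illustrative assumptions). It regulates the end-effector to a fixed target, so $\dot{p}^* = 0$ and the commanded velocity is purely the error-correction term $k_p \Delta p$:

```python
import math

L1, L2 = 1.0, 1.0  # link lengths (assumed)

def fk(q1, q2):
    """Forward kinematics of the two-link planar arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def jacobian(q1, q2):
    """Analytic 2x2 Jacobian."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def step(q1, q2, target, kp=2.0, dt=0.01):
    """One closed-loop update: q_dot = J^+ (p_dot* + kp * dp), with p_dot* = 0.

    Assumes we stay away from singularities (det J = L1*L2*sin(q2) != 0).
    """
    x, y = fk(q1, q2)
    ex, ey = target[0] - x, target[1] - y          # task-space error dp
    (a, b), (c, d) = jacobian(q1, q2)
    det = a * d - b * c
    # 2x2 inverse of J applied to kp * dp (pseudo-inverse = inverse here)
    dq1 = ( d * kp * ex - b * kp * ey) / det
    dq2 = (-c * kp * ex + a * kp * ey) / det
    return q1 + dt * dq1, q2 + dt * dq2            # Euler integration

# Drive the arm toward a reachable target
q1, q2 = 0.2, 0.9
target = (1.2, 0.8)
for _ in range(2000):
    q1, q2 = step(q1, q2, target)
x, y = fk(q1, q2)
err = math.hypot(target[0] - x, target[1] - y)     # residual tracking error
```

With `kp = 2.0` and `dt = 0.01` the task-space error decays smoothly toward zero; raising `kp` too far, or taking `dt` too large, produces exactly the oscillations the tip above warns about.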

## Why Classical Approaches Struggle

With differential reasoning and feedback, many tracking tasks are solvable—on paper. In practice, the system still breaks under real-world complexity for the reasons below.


<img src="https://huggingface.co/robotics-course/images/resolve/main/ch2/ch2-classical-limitations.png" alt="Classical Limitations" style="width: 90%;" />

*Four key limitations of dynamics-based robotics approaches.*


### 1. **Integration Challenges**
Classical pipelines are built from **separate modules**:
- Sensing → State Estimation → Planning → Control → Actuation

**Problems:**
- Errors compound through the pipeline
- Brittle when any component fails
- Hard to adapt to new tasks or robots

### 2. **Limited Scalability**  
Traditional methods struggle with:
- **High-dimensional sensor data** (cameras, LIDAR)
- **Multi-task scenarios** (each task needs custom planning)
- **Multi-modal integration** (vision + touch + proprioception)

### 3. **Modeling Limitations**
Real-world physics is complex:
- **Contact dynamics** - Hard to model precisely
- **Deformable objects** - Beyond rigid-body assumptions
- **Friction and compliance** - Difficult to characterize

### 4. **Ignoring Data Trends**
Classical methods don't leverage:
- **Growing robotics datasets** - Millions of demonstrations available
- **Cross-robot learning** - Insights from other platforms
- **Community knowledge** - Decentralized data collection

## The Learning Alternative

To address these limitations, we contrast a classical modular pipeline with an end-to-end learning policy.

<hfoptions id="robotics-approaches">
<hfoption id="classical">

**Classical Robotics Approach:**

```
Perception → State Estimation → Planning → Control → Actuation
```

**Challenges:**
- Each module needs expert tuning
- Errors compound through pipeline  
- Hard to adapt to new tasks/robots
- Requires precise world models

</hfoption>
<hfoption id="learning">

**Learning-Based Approach:**

```
Raw Sensors → Neural Network → Actions
```

**Benefits:**
- Learn from data - Use demonstrations and experience
- End-to-end training - Optimize the entire pipeline together  
- Generalize across tasks - Share knowledge between different objectives
- Adapt to new robots - Transfer insights across platforms

</hfoption>
</hfoptions>

This is the promise of **robot learning**!


> [!TIP]
> **The Best of Both Worlds:** Modern robot learning often combines classical insights with learning. For example, one can pair learned policies with safety constraints derived from control theory.
>
> Pure learning versus pure classical is a false dichotomy; hybrid approaches have seen notable successes.

---

## Key Takeaways

- Classical robotics relies on explicit mathematical models and expert knowledge
- Forward kinematics is straightforward to compute, while inverse kinematics is more broadly useful but can be challenging to solve in practice  
- Differential kinematics works with velocities rather than positions for better control
- Classical approaches struggle with integration, scalability, modeling accuracy, and data utilization
- Learning-based methods offer solutions to these fundamental limitations
- The future lies in hybrid approaches that combine classical insights with learning capabilities

> [!TIP]
> Up next, we'll show how learning-based methods (reinforcement learning and imitation learning) absorb some of this complexity by optimizing directly from data.

## References

For a full list of references, check out the [tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial).

- **Feedback Systems: An Introduction for Scientists and Engineers** (2008)  
  Karl Johan Åström and Richard M. Murray  
  A comprehensive introduction to feedback control systems, covering the principles that underlie closed-loop control in robotics.  
  [Book Website](http://www.cds.caltech.edu/~murray/amwiki/index.php/Main_Page)

- **Real-Time Obstacle Avoidance for Manipulators and Mobile Robots** (1986)  
  Oussama Khatib  
  A seminal paper introducing the artificial potential field method for obstacle avoidance, demonstrating how feedback can be used for reactive control in dynamic environments.  
  [DOI:10.1177/027836498600500106](https://doi.org/10.1177/027836498600500106)



<EditOnGithub source="https://github.com/huggingface/robotics-course/blob/main/units/en/unit2/4.mdx" />

### Welcome to the 🤗 Robotics Course
https://huggingface.co/learn/robotics-course/unit0/1.md

# Welcome to the 🤗 Robotics Course

![Robotics Course](https://huggingface.co/robotics-course/images/resolve/main/ch1/ch1-lerobot-figure1.png)

This free course will take you on a journey, **from classical robotics to modern learning-based approaches**, in understanding, implementing, and applying machine learning techniques to real robotic systems.

This course is based on the [Robot Learning Tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial), which is a comprehensive guide to robot learning for researchers and practitioners. Here, we are attempting to distill the tutorial into a more accessible format for the community.

This first unit will help you onboard. You'll see the course syllabus and learning objectives, understand the structure and prerequisites, meet the team behind the course, learn about LeRobot and the surrounding Hugging Face ecosystem, and explore the community resources that support your journey.

> [!TIP]
> This course bridges theory and practice in Robotics! It's designed for students interested in understanding how machine learning is transforming robotics. Whether you're new to robotics or looking to understand learning-based approaches, this course will guide you step by step.

## What to expect from this course?

Across the course you will study classical robotics foundations and modern learning‑based approaches, learn to use LeRobot, work with real robotics datasets, and implement state‑of‑the‑art algorithms. The emphasis is on practical skills you can apply to real robotic systems.

At the end of this course, you'll understand:

- how robots learn from data
- why learning-based approaches are transforming robotics
- how to implement these techniques using modern tools like LeRobot

## What's the syllabus?

Here is the general syllabus for the robotics course. Each unit builds on the previous ones to give you a comprehensive understanding of Robotics.

| # | Topic | Description | Released |
| - | ----- | ----------- | -------- |
| 0 | Welcome to the Robotics Course | Welcome, prerequisites, and course overview | ✅ |
| 1 | Course Introduction | Introduction to Robot Learning and LeRobot ecosystem | ✅ |
| 2 | Classical Robotics | Traditional approaches and their limitations | ✅ |
| 5 | Reinforcement Learning | How robots learn through trial and error | Coming Soon |
| 6 | Imitation Learning | Learning from demonstrations and behavioral cloning | Coming Soon |
| 7 | Foundation Models | Large-scale models for general robotics | Coming Soon |

<!-- TODO: update the syllabus with final changes -->

## What are the prerequisites?

You should be comfortable with basic Python (variables, functions, loops). Elementary linear algebra and calculus help for a full understanding but aren’t required. 

General familiarity with ML is a bonus, but we'll explain concepts as they arise. Most importantly, bring curiosity about how machines learn to act in the physical world.

> [!TIP]
> **New to robotics?** This course is designed to be beginner-friendly! We start from the basics and build up to advanced concepts. If you have questions or need help, check out the [course community](https://huggingface.co/spaces/robotics-course/README/discussions) on the Hugging Face Hub.

## What tools do I need?

<!-- TODO: add tools needed -->

> [!TIP]
> **Don't have a robot?** No problem! You can follow along with simulated environments and datasets. The concepts translate directly to real hardware when you're ready.

## Learning and Assessment

This course is designed for **self-paced learning** with built-in assessments to help you track your progress.

**Course Features:**
* **Interactive quizzes** at the end of each major unit to test your understanding
* **Hands-on code examples** using LeRobot
* **Progressive difficulty** building from basic concepts to advanced techniques
* **Real-world applications** connecting theory to practical robotics problems

**No formal certification required** - the goal is to gain practical knowledge and skills in Robotics that you can apply to your own projects and research.

## What is the recommended pace?

This course is designed to be **self-paced and flexible**. Each unit should take approximately **30-45 minutes** to complete, including reading, understanding concepts, and working through code examples.

**Recommended approach:**
* **Take your time** with each concept - Robotics builds on foundational understanding
* **Try the code examples** - hands-on experience reinforces learning
* **Take the quizzes** - they help identify areas that need more review
* **Take breaks** between units to let concepts sink in

## How to get the most out of the course?

To get the most out of this robotics course, we recommend:

1. **Engage with the community**: Join the discussion [here](https://huggingface.co/spaces/robotics-course/README/discussions), explore LeRobot [documentation](https://huggingface.co/docs/lerobot/index) and connect with other learners interested in Robotics.
2. **Practice with the code examples**: The best way to understand Robotics is through hands-on experience with real datasets and algorithms.
3. **Take the quizzes seriously**: They're designed to reinforce key concepts and identify areas where you might need additional review.
4. **Explore beyond the course**: Try LeRobot examples, experiment with different datasets, and see how the concepts apply to your own interests.

## Acknowledgments

We would like to extend our gratitude to the following projects and communities:

- [LeRobot](https://github.com/huggingface/lerobot) - The open-source robotics library that powers this course
- [PyTorch](https://pytorch.org) - The deep learning framework used throughout
- The broader **robotics research community** for creating and sharing the datasets and algorithms that make Robotics possible

## I found a bug, or I want to improve the course

Contributions are **welcome** 🤗

* If you _found a bug or error_, please [open an issue](https://github.com/huggingface/robotic-course/issues/new) and **describe the problem**.
* If you _want to improve the course_, you can contribute to the robotics community through LeRobot development.
* If you _want to add content or suggest improvements_, engage with the robotics community and share your ideas.



<EditOnGithub source="https://github.com/huggingface/robotics-course/blob/main/units/en/unit0/1.mdx" />

### Getting Started with LeRobot
https://huggingface.co/learn/robotics-course/unit0/2.md

# Getting Started with LeRobot

Throughout this course, we'll use LeRobot, Hugging Face's comprehensive open-source robotics library that democratizes access to state-of-the-art robot learning techniques. LeRobot addresses one of the biggest barriers in robotics education and research: the complexity of working with real robotic systems and data.

The library provides intuitive dataset handling that makes working with complex robotics data as straightforward as working with text or images in traditional machine learning. You'll have access to pre-trained models that serve as strong starting points for your own projects, allowing you to build upon proven algorithms rather than starting from scratch. LeRobot also supports deployment on real robotic hardware, bridging the gap between simulation and real-world applications. Perhaps most importantly, it connects you to a growing community of robot learning practitioners, from researchers at leading institutions to hobbyists building amazing projects at home.

> [!TIP]
> **Installation Preview:** We'll walk through installing LeRobot in Unit 2, but if you're eager to get started:
>
> ```bash
> pip install lerobot
> ```
>
> Check out the [LeRobot GitHub repository](https://github.com/huggingface/lerobot) for more details!

Ready to start your robot learning journey? Let's begin with understanding why classical robotics approaches have limitations and how learning can help overcome them.

<EditOnGithub source="https://github.com/huggingface/robotics-course/blob/main/units/en/unit0/2.mdx" />
