In this tutorial you will go through the full Human-in-the-Loop Sample-Efficient Reinforcement Learning (HIL-SERL) workflow using LeRobot. You will master training a policy with RL on a real robot in just a few hours.
HIL-SERL is a sample-efficient reinforcement learning algorithm that combines human demonstrations with online learning and human interventions. The approach starts from a small set of human demonstrations, uses them to train a reward classifier, and then employs an actor-learner architecture where humans can intervene during policy execution to guide exploration and correct unsafe behaviors. In this tutorial, you’ll use a gamepad to provide interventions and control the robot during the learning process.
It combines three key ingredients:
Offline demonstrations & reward classifier: a handful of human-teleop episodes plus a vision-based success detector give the policy a shaped starting point.
On-robot actor / learner loop with human interventions: a distributed Soft Actor Critic (SAC) learner updates the policy while an actor explores on the physical robot; the human can jump in at any time to correct dangerous or unproductive behaviour.
Safety & efficiency tools: joint/end-effector (EE) bounds, crop region of interest (ROI) preprocessing and WandB monitoring keep the data useful and the hardware safe.
Together these elements let HIL-SERL reach near-perfect task success and faster cycle times than imitation-only baselines.
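To make the loop concrete, here is a minimal, self-contained Python sketch of an intervention-aware actor loop. It uses dummy stand-ins for the policy, the environment, and the intervention device (none of these are LeRobot's actual API): whenever the human intervenes, the human action replaces the policy action, is executed on the robot, and is stored in the replay buffer alongside regular policy transitions.
import random

def policy(obs):
    # Placeholder policy: random end-effector deltas in [-1, 1]
    return [random.uniform(-1.0, 1.0) for _ in range(3)]

def human_intervention():
    # Placeholder for the gamepad/keyboard/leader-arm input; return an action
    # list here to simulate a human correction, or None for no intervention
    return None

def env_step(obs, action):
    # Placeholder environment step: next observation, reward, done flag
    return obs, 0.0, False

replay_buffer = []
obs = [0.0, 0.0, 0.0]
for _ in range(100):
    action = policy(obs)
    correction = human_intervention()
    if correction is not None:
        action = correction  # the human action overrides and replaces the policy action
    next_obs, reward, done = env_step(obs, action)
    replay_buffer.append((obs, action, reward, next_obs, done, correction is not None))
    obs = next_obs if not done else [0.0, 0.0, 0.0]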

HIL-SERL workflow, Luo et al. 2024
This guide provides step-by-step instructions for training a policy on a real robot with LeRobot's HIL-SERL implementation.
You can use HIL-SERL to train on a variety of manipulation tasks.
To install LeRobot with HIL-SERL, you need to install the hilserl extra.
pip install -e ".[hilserl]"
The training process begins with proper configuration of the HILSerl environment. The main configuration class is GymManipulatorConfig in lerobot/rl/gym_manipulator.py, which contains the nested HILSerlRobotEnvConfig and DatasetConfig. The configuration is organized into focused, nested sub-configs:
class GymManipulatorConfig:
env: HILSerlRobotEnvConfig # Environment configuration (nested)
dataset: DatasetConfig # Dataset recording/replay configuration (nested)
mode: str | None = None # "record", "replay", or None (for training)
device: str = "cpu" # Compute device
class HILSerlRobotEnvConfig(EnvConfig):
robot: RobotConfig | None = None # Main robot agent (defined in `lerobot/robots`)
teleop: TeleoperatorConfig | None = None # Teleoperator agent, e.g., gamepad or leader arm
processor: HILSerlProcessorConfig # Processing pipeline configuration (nested)
name: str = "real_robot" # Environment name
task: str | None = None # Task identifier
fps: int = 10 # Control frequency
# Nested processor configuration
class HILSerlProcessorConfig:
control_mode: str = "gamepad" # Control mode
observation: ObservationConfig | None = None # Observation processing settings
image_preprocessing: ImagePreprocessingConfig | None = None # Image crop/resize settings
gripper: GripperConfig | None = None # Gripper control and penalty settings
reset: ResetConfig | None = None # Environment reset and timing settings
inverse_kinematics: InverseKinematicsConfig | None = None # IK processing settings
reward_classifier: RewardClassifierConfig | None = None # Reward classifier settings
max_gripper_pos: float | None = 100.0 # Maximum gripper position
# Sub-configuration classes
class ObservationConfig:
add_joint_velocity_to_observation: bool = False # Add joint velocities to state
add_current_to_observation: bool = False # Add motor currents to state
display_cameras: bool = False # Display camera feeds during execution
class ImagePreprocessingConfig:
crop_params_dict: dict[str, tuple[int, int, int, int]] | None = None # Image cropping parameters
resize_size: tuple[int, int] | None = None # Target image size
class GripperConfig:
use_gripper: bool = True # Enable gripper control
gripper_penalty: float = 0.0 # Penalty for inappropriate gripper usage
class ResetConfig:
fixed_reset_joint_positions: Any | None = None # Joint positions for reset
reset_time_s: float = 5.0 # Time to wait during reset
control_time_s: float = 20.0 # Maximum episode duration
terminate_on_success: bool = True # Whether to terminate episodes on success detection
class InverseKinematicsConfig:
urdf_path: str | None = None # Path to robot URDF file
target_frame_name: str | None = None # End-effector frame name
end_effector_bounds: dict[str, list[float]] | None = None # EE workspace bounds
end_effector_step_sizes: dict[str, float] | None = None # EE step sizes per axis
class RewardClassifierConfig:
pretrained_path: str | None = None # Path to pretrained reward classifier
success_threshold: float = 0.5 # Success detection threshold
success_reward: float = 1.0 # Reward value for successful episodes
# Dataset configuration
class DatasetConfig:
repo_id: str # LeRobot dataset repository ID
task: str # Task identifier
root: str | None = None # Local dataset root directory
num_episodes_to_record: int = 5 # Number of episodes for recording
replay_episode: int | None = None # Episode index for replay
push_to_hub: bool = False # Whether to push datasets to the Hub
HIL-SERL uses a modular processor pipeline architecture that processes robot observations and actions through a series of composable steps. The pipeline is divided into two main components:
The environment processor (env_processor) handles incoming observations and environment state.
The action processor (action_processor) handles outgoing actions and human interventions.
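As a conceptual sketch of what composable steps means here (illustrative only; the add_joint_velocity and clip_action steps below are hypothetical and not LeRobot's actual processor classes), each step is a callable that transforms an observation or action and is chained with the others:
def add_joint_velocity(obs):
    # Example observation step: append a (dummy) velocity entry
    obs = dict(obs)
    obs["joint_velocity"] = [0.0] * len(obs["joint_position"])
    return obs

def clip_action(action, low=-1.0, high=1.0):
    # Example action step: clamp each action dimension to a safe range
    return [max(low, min(high, a)) for a in action]

def compose(*steps):
    # Chain processing steps left to right
    def pipeline(x):
        for step in steps:
            x = step(x)
        return x
    return pipeline

env_processor = compose(add_joint_velocity)   # incoming observations
action_processor = compose(clip_action)       # outgoing actions

obs = env_processor({"joint_position": [0.1, 0.2, 0.3]})
action = action_processor([1.5, -0.2, 0.4])   # -> [1.0, -0.2, 0.4]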
Basic Observation Processing:
{
"env": {
"processor": {
"observation": {
"add_joint_velocity_to_observation": true,
"add_current_to_observation": false,
"display_cameras": false
}
}
}
}
Image Processing:
{
"env": {
"processor": {
"image_preprocessing": {
"crop_params_dict": {
"observation.images.front": [180, 250, 120, 150],
"observation.images.side": [180, 207, 180, 200]
},
"resize_size": [128, 128]
}
}
}
}
Inverse Kinematics Setup:
{
"env": {
"processor": {
"inverse_kinematics": {
"urdf_path": "path/to/robot.urdf",
"target_frame_name": "end_effector",
"end_effector_bounds": {
"min": [0.16, -0.08, 0.03],
"max": [0.24, 0.2, 0.1]
},
"end_effector_step_sizes": {
"x": 0.02,
"y": 0.02,
"z": 0.02
}
}
}
}
}
The HIL-SERL framework supports additional observation processing features that can improve policy learning:
Enable joint velocity estimation to provide the policy with motion information:
{
"env": {
"processor": {
"observation": {
"add_joint_velocity_to_observation": true
}
}
}
}
This processor estimates joint velocities from consecutive position readings and appends them to the robot state, giving the policy information about how the arm is moving.
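A common way to obtain such velocities, and a reasonable mental model for this processor (a hedged sketch, not the exact LeRobot implementation), is a finite difference of consecutive joint-position readings scaled by the control frequency:
fps = 10  # control frequency from the environment config

def estimate_joint_velocities(prev_positions, positions):
    # Finite-difference estimate: (q_t - q_{t-1}) * fps
    return [(q - q_prev) * fps for q, q_prev in zip(positions, prev_positions)]

prev_q = [0.00, 0.10, 0.20]
q = [0.02, 0.12, 0.19]
state = q + estimate_joint_velocities(prev_q, q)  # positions followed by velocities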
Monitor motor currents to detect contact forces and load conditions:
{
"env": {
"processor": {
"observation": {
"add_current_to_observation": true
}
}
}
}
This processor appends the motors' current readings to the robot state, which can help the policy detect contact forces and load conditions.
You can enable multiple observation processing features simultaneously:
{
"env": {
"processor": {
"observation": {
"add_joint_velocity_to_observation": true,
"add_current_to_observation": true,
"display_cameras": false
}
}
}
}
Note: Enabling additional observation features increases the state space dimensionality, which may require adjusting your policy network architecture and potentially collecting more training data.
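As a quick illustration of the dimensionality note above (assuming a 6-motor arm and one value per motor for each feature; numbers are illustrative):
num_motors = 6                 # illustrative assumption
state_dim = num_motors         # joint positions
state_dim += num_motors        # + joint velocities (add_joint_velocity_to_observation)
state_dim += num_motors        # + motor currents (add_current_to_observation)
print(state_dim)               # 18 instead of 6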
Before collecting demonstrations, you need to determine the appropriate operational bounds for your robot.
This helps simplify the problem of learning on the real robot in two ways: 1) by limiting the robot’s operational space to a specific region that solves the task and avoids unnecessary or unsafe exploration, and 2) by allowing training in end-effector space rather than joint space. Empirically, learning in joint space for reinforcement learning in manipulation is often a harder problem - some tasks are nearly impossible to learn in joint space but become learnable when the action space is transformed to end-effector coordinates.
Using lerobot-find-joint-limits
This script helps you find the safe operational bounds for your robot's end-effector. Given a follower and a leader arm, you can use it to find the bounds for the follower arm that will be applied during training. Bounding the action space reduces redundant exploration by the agent and helps keep the hardware safe.
lerobot-find-joint-limits \
    --robot.type=so100_follower \
    --robot.port=/dev/tty.usbmodem58760431541 \
    --robot.id=black \
    --teleop.type=so100_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \
    --teleop.id=blue
Workflow
Run the script above and move the follower arm (by teleoperating it with the leader) through the full workspace your task requires. The script prints the minimum and maximum end-effector positions and joint angles it observed, for example:
Max ee position [0.2417 0.2012 0.1027]
Min ee position [0.1663 -0.0823 0.0336]
Max joint positions [-20.0, -20.0, -20.0, -20.0, -20.0, -20.0]
Min joint positions [50.0, 50.0, 50.0, 50.0, 50.0, 50.0]
Use the printed end-effector values in the end_effector_bounds field of your configuration.
Example Configuration
"end_effector_bounds": {
"max": [0.24, 0.20, 0.10],
"min": [0.16, -0.08, 0.03]
}
With the bounds defined, you can safely collect demonstrations for training. Training RL with an off-policy algorithm allows us to reuse previously collected offline data to improve the efficiency of the learning process.
Setting Up Record Mode
Create a configuration file for recording demonstrations (or edit an existing one like env_config.json):
Set the following fields:
mode to "record" at the root level
dataset.repo_id to your dataset name (e.g., "username/task_name")
dataset.num_episodes_to_record to the number of demonstrations you want to collect
env.processor.image_preprocessing.crop_params_dict to {} initially (we'll determine crops later)
env.robot, env.teleop, and the other hardware settings in the env section
Example configuration section:
{
"env": {
"type": "gym_manipulator",
"name": "real_robot",
"fps": 10,
"processor": {
"control_mode": "gamepad",
"observation": {
"display_cameras": false
},
"image_preprocessing": {
"crop_params_dict": {},
"resize_size": [128, 128]
},
"gripper": {
"use_gripper": true,
"gripper_penalty": 0.0
},
"reset": {
"reset_time_s": 5.0,
"control_time_s": 20.0
}
},
"robot": {
// ... robot configuration ...
},
"teleop": {
// ... teleoperator configuration ...
}
},
"dataset": {
"repo_id": "username/pick_lift_cube",
"root": null,
"task": "pick_and_lift",
"num_episodes_to_record": 15,
"replay_episode": 0,
"push_to_hub": true
},
"mode": "record",
"device": "cpu"
}
Along with your robot, you will need a teleoperation device to control it, both to collect datasets for your task and to perform interventions during online training. We support a gamepad, a keyboard, or the robot's leader arm.
HIL-SERL learns actions in the end-effector space of the robot. Therefore, teleoperation controls the end-effector's x, y, z displacements.
For that we need to define a version of the robot that takes actions in the end-effector space. Check the robot class SO100FollowerEndEffector and its configuration SO100FollowerEndEffectorConfig for the default parameters related to the end-effector space.
class SO100FollowerEndEffectorConfig(SO100FollowerConfig):
"""Configuration for the SO100FollowerEndEffector robot."""
# Default bounds for the end-effector position (in meters)
end_effector_bounds: dict[str, list[float]] = field( # bounds for the end-effector in x,y,z direction
default_factory=lambda: {
"min": [-1.0, -1.0, -1.0], # min x, y, z
"max": [1.0, 1.0, 1.0], # max x, y, z
}
)
max_gripper_pos: float = 50 # maximum position to which the gripper is allowed to open
end_effector_step_sizes: dict[str, float] = field( # maximum step size for the end-effector in x,y,z direction
default_factory=lambda: {
"x": 0.02,
"y": 0.02,
"z": 0.02,
}
)
The Teleoperator defines the teleoperation device. You can check the list of available teleoperators in lerobot/teleoperators.
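As a rough illustration of how the end-effector parameters above interact (a sketch under the stated defaults, not the SO100FollowerEndEffector implementation): a normalized per-axis action is scaled by end_effector_step_sizes, the resulting target is clamped to end_effector_bounds, and the clamped target is then passed to inverse kinematics to obtain joint commands.
end_effector_step_sizes = {"x": 0.02, "y": 0.02, "z": 0.02}
end_effector_bounds = {"min": [0.16, -0.08, 0.03], "max": [0.24, 0.20, 0.10]}

def apply_ee_action(current_ee, action):
    # action: normalized (x, y, z) deltas, each expected in [-1, 1]
    target = []
    for i, axis in enumerate(("x", "y", "z")):
        value = current_ee[i] + action[i] * end_effector_step_sizes[axis]
        # Clamp to the safe workspace found with lerobot-find-joint-limits
        value = max(end_effector_bounds["min"][i], min(end_effector_bounds["max"][i], value))
        target.append(value)
    return target  # passed to inverse kinematics to obtain joint targets

print(apply_ee_action([0.20, 0.05, 0.06], [1.0, -0.5, 0.0]))  # approximately [0.22, 0.04, 0.06]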
Setting up the Gamepad
The gamepad provides a very convenient way to control the robot and the episode state.
To set up the gamepad, set control_mode to "gamepad" and define the teleop section in the configuration file.
{
"env": {
"teleop": {
"type": "gamepad",
"use_gripper": true
},
"processor": {
"control_mode": "gamepad",
"gripper": {
"use_gripper": true
}
}
}
}
Gamepad button mapping for robot control and episode management
Setting up the SO101 leader
The SO101 leader arm has reduced gears, which allows it to move and track the follower arm during exploration. Taking over is therefore much smoother than with the gearless SO100.
To set up the SO101 leader, set control_mode to "leader" and define the teleop section in the configuration file.
{
"env": {
"teleop": {
"type": "so101_leader",
"port": "/dev/tty.usbmodem585A0077921",
"use_degrees": true
},
"processor": {
"control_mode": "leader",
"gripper": {
"use_gripper": true
}
}
}
}
To annotate the success or failure of an episode, use the keyboard: press s for success and esc for failure.
During online training, press space to take over from the policy and press space again to give control back to it.
SO101 leader teleoperation example, the leader tracks the follower, press `space` to intervene
Recording Demonstrations
Start the recording process; an example config file can be found here:
python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config_so100.json
During recording, the robot resets to the joint positions defined in env.processor.reset.fixed_reset_joint_positions at the start of each episode.
After collecting demonstrations, process them to determine optimal camera crops. Reinforcement learning is sensitive to background distractions, so it is important to crop the images to the relevant workspace area.
Visual RL algorithms learn directly from pixel inputs, making them vulnerable to irrelevant visual information. Background elements like changing lighting, shadows, people moving, or objects outside the workspace can confuse the learning process. A good ROI keeps the workspace and task-relevant objects in view while excluding such distractions.
Note: If you already know the crop parameters, you can skip this step and just set the crop_params_dict in the configuration file during recording.
Determining Crop Parameters
Use the crop_dataset_roi.py script to interactively select regions of interest in your camera images:
python -m lerobot.rl.crop_dataset_roi --repo-id username/pick_lift_cube
Example output:
Selected Rectangular Regions of Interest (top, left, height, width):
observation.images.side: [180, 207, 180, 200]
observation.images.front: [180, 250, 120, 150]
Interactive cropping tool for selecting regions of interest
Updating Configuration
Add these crop parameters to your training configuration:
{
"env": {
"processor": {
"image_preprocessing": {
"crop_params_dict": {
"observation.images.side": [180, 207, 180, 200],
"observation.images.front": [180, 250, 120, 150]
},
"resize_size": [128, 128]
}
}
}
}
Recommended image resolution
Most vision-based policies have been validated on square inputs of either 128×128 (default) or 64×64 pixels. We therefore advise setting the resize_size parameter to [128, 128] – or [64, 64] if you need to save GPU memory and bandwidth. Other resolutions are possible but have not been extensively tested.
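To visualize what crop_params_dict and resize_size do to a frame, here is a standalone sketch using torchvision (assuming the (top, left, height, width) convention reported by the crop tool above; this is illustrative, not the code path LeRobot itself runs):
import torch
from torchvision.transforms.functional import crop, resize

crop_params_dict = {
    "observation.images.front": (180, 250, 120, 150),  # top, left, height, width
    "observation.images.side": (180, 207, 180, 200),
}
resize_size = (128, 128)

frame = torch.rand(3, 480, 640)  # dummy camera frame (C, H, W)
top, left, height, width = crop_params_dict["observation.images.front"]
processed = resize(crop(frame, top, left, height, width), list(resize_size))
print(processed.shape)  # torch.Size([3, 128, 128])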
The reward classifier plays an important role in the HIL-SERL workflow by automating reward assignment and automatically detecting episode success. Instead of manually defining reward functions or relying on human feedback for every timestep, the reward classifier learns to predict success/failure from visual observations. This enables the RL algorithm to learn efficiently by providing consistent and automated reward signals based on the robot’s camera inputs.
This guide explains how to train a reward classifier for LeRobot's human-in-the-loop reinforcement learning implementation. A reward classifier learns to predict the reward of a given state, which can then be used in an RL setup to train a policy.
Note: Training a reward classifier is optional. You can start the first round of RL experiments by annotating the success manually with your gamepad or keyboard device.
The reward classifier implementation in modeling_classifier.py uses a pretrained vision model to process the images. It can output either a single value for binary rewards to predict success/fail cases or multiple values for multi-class settings.
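Conceptually, the classifier maps camera images to a success probability that is thresholded into a sparse reward. Below is a minimal PyTorch sketch with a toy convolutional backbone standing in for the pretrained vision model (the real implementation lives in modeling_classifier.py and differs in detail):
import torch
import torch.nn as nn

class ToyRewardClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Toy backbone; the real classifier uses a pretrained vision model
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(16, 1)  # one logit for binary success/fail

    def forward(self, image):
        return self.head(self.backbone(image)).squeeze(-1)

model = ToyRewardClassifier()
image = torch.rand(1, 3, 128, 128)                   # one camera frame
success_prob = torch.sigmoid(model(image))
reward = 1.0 if success_prob.item() > 0.5 else 0.0   # success_threshold-style decision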
Collecting a Dataset for the reward classifier
Before training, you need to collect a dataset with labeled examples. The record_dataset function in gym_manipulator.py handles collecting a dataset of observations, actions, and rewards.
To collect a dataset, you need to modify some parameters in the environment configuration based on HILSerlRobotEnvConfig.
python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/reward_classifier_train_config.json
Key Parameters for Data Collection
"record" to collect a dataset (at root level)"hf_username/dataset_name", name of the dataset and repo on the hubtrue)The env.processor.reset.terminate_on_success parameter allows you to control episode termination behavior. When set to false, episodes will continue even after success is detected, allowing you to collect more positive examples with the reward=1 label. This is crucial for training reward classifiers as it provides more success state examples in your dataset. When set to true (default), episodes terminate immediately upon success detection.
Important: For reward classifier training, set terminate_on_success: false to collect sufficient positive examples. For regular HIL-SERL training, keep it as true to enable automatic episode termination when the task is completed successfully.
Example configuration section for data collection:
{
"env": {
"type": "gym_manipulator",
"name": "real_robot",
"fps": 10,
"processor": {
"reset": {
"reset_time_s": 5.0,
"control_time_s": 20.0,
"terminate_on_success": false
},
"gripper": {
"use_gripper": true
}
},
"robot": {
// ... robot configuration ...
},
"teleop": {
// ... teleoperator configuration ...
}
},
"dataset": {
"repo_id": "hf_username/dataset_name",
"dataset_root": "data/your_dataset",
"task": "reward_classifier_task",
"num_episodes_to_record": 20,
"replay_episode": null,
"push_to_hub": true
},
"mode": "record",
"device": "cpu"
}
Reward Classifier Configuration
The reward classifier is configured using configuration_classifier.py. Here are the key parameters:
"helper2424/resnet10")"cnn" or "transformer"Example configuration for training the reward classifier:
{
"policy": {
"type": "reward_classifier",
"model_name": "helper2424/resnet10",
"model_type": "cnn",
"num_cameras": 2,
"num_classes": 2,
"hidden_dim": 256,
"dropout_rate": 0.1,
"learning_rate": 1e-4,
"device": "cuda",
"use_amp": true,
"input_features": {
"observation.images.front": {
"type": "VISUAL",
"shape": [3, 128, 128]
},
"observation.images.side": {
"type": "VISUAL",
"shape": [3, 128, 128]
}
}
}
}
Training the Classifier
To train the classifier, use the lerobot-train command with your configuration:
lerobot-train --config_path path/to/reward_classifier_train_config.json
Deploying and Testing the Model
To use your trained reward classifier, configure the HILSerlRobotEnvConfig to use your model:
config = GymManipulatorConfig(
env=HILSerlRobotEnvConfig(
processor=HILSerlProcessorConfig(
reward_classifier=RewardClassifierConfig(
pretrained_path="path_to_your_pretrained_trained_model"
)
),
# Other environment parameters
),
dataset=DatasetConfig(...),
mode=None # For training
)
Alternatively, set the corresponding fields in the JSON config file:
{
"env": {
"processor": {
"reward_classifier": {
"pretrained_path": "path_to_your_pretrained_model",
"success_threshold": 0.7,
"success_reward": 1.0
},
"reset": {
"terminate_on_success": true
}
}
}
}
Run gym_manipulator.py to test the model:
python -m lerobot.rl.gym_manipulator --config_path path/to/env_config.json
The reward classifier will automatically provide rewards based on the visual input from the robot’s cameras.
Example Workflow for training the reward classifier
Create the configuration files: Create the necessary json configuration files for the reward classifier and the environment. Check the examples here.
Collect a dataset:
python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
Train the classifier:
lerobot-train --config_path src/lerobot/configs/reward_classifier_train_config.json
Test the classifier:
python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
The LeRobot system uses a distributed actor-learner architecture for training. This architecture decouples robot interactions from the learning process, allowing them to run concurrently without blocking each other. The actor server handles robot observations and actions, sending interaction data to the learner server. The learner server performs gradient descent and periodically updates the actor’s policy weights. You will need to start two processes: a learner and an actor.
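The following toy sketch illustrates the decoupling idea with Python threads and in-process queues (the real system runs two separate processes that communicate over gRPC):
import queue
import threading
import time

transitions = queue.Queue()  # actor -> learner: interaction data
weights = queue.Queue()      # learner -> actor: updated policy parameters

def actor():
    policy_version = 0
    for step in range(20):
        transitions.put({"step": step, "policy_version": policy_version})  # act on the robot
        try:
            policy_version = weights.get_nowait()  # pick up fresh weights when available
        except queue.Empty:
            pass
        time.sleep(0.01)

def learner():
    version = 0
    for _ in range(20):
        transitions.get()     # add to the replay buffer and run gradient steps
        version += 1
        weights.put(version)  # push new parameters back to the actor

threads = [threading.Thread(target=actor), threading.Thread(target=learner)]
for t in threads:
    t.start()
for t in threads:
    t.join()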
Configuration Setup
Create a training configuration file (example available here). The training config is based on the main TrainRLServerPipelineConfig class in lerobot/configs/train.py.
type="sac", device, etc.)dataset to your cropped datasetpolicy config is correct with the right input_features and output_features for your task.Starting the Learner
First, start the learner server process:
python -m lerobot.rl.learner --config_path src/lerobot/configs/train_config_hilserl_so100.json
The learner opens a gRPC server to communicate with actors, stores the incoming interaction data in its replay buffers, performs gradient updates, and periodically pushes fresh policy weights to the actor.
Starting the Actor
In a separate terminal, start the actor process with the same configuration:
python -m lerobot.rl.actor --config_path src/lerobot/configs/train_config_hilserl_so100.json
The actor connects to the learner over gRPC, runs the current policy on the robot, and streams the resulting transitions (including any human interventions) back to the learner.
Training Flow
The training proceeds automatically: the actor executes the policy on the robot and streams transitions to the learner, the learner updates the policy from these transitions (together with the offline demonstrations), and the updated weights are periodically sent back to the actor.
Human in the Loop
To take over, use your teleoperation device (with the keyboard, press the space key). This will pause the policy actions and allow you to take over; press space again to give control back to the policy. You can follow the intervention rate on the wandb dashboard.
Example showing how human interventions help guide policy learning over time
Monitoring and Debugging
If you have wandb.enable set to true in your configuration, you can monitor training progress in real-time through the Weights & Biases dashboard.
The learning process is very sensitive to the intervention strategy, and it takes a few runs to understand how to intervene effectively. Ideally, your intervention rate should drop gradually during training, as shown in the figure below.

Plot of the intervention rate during a training run on a pick and lift cube task
Some configuration values have a disproportionate impact on training stability and speed:
temperature_init (policy.temperature_init) – initial entropy temperature in SAC. Higher values encourage more exploration; lower values make the policy more deterministic early on. A good starting point is 1e-2. We observed that setting it too high can make human interventions ineffective and slow down learning.
policy_parameters_push_frequency (policy.actor_learner_config.policy_parameters_push_frequency) – interval in seconds between two weight pushes from the learner to the actor. The default is 4 s. Decrease it to 1-2 s to provide fresher weights (at the cost of more network traffic); increase it only if your connection is slow, as this will reduce sample efficiency.
storage_device (policy.storage_device) – device on which the learner keeps the policy parameters. If you have spare GPU memory, set this to "cuda" (instead of the default "cpu"). Keeping the weights on the GPU removes CPU→GPU transfer overhead and can significantly increase the number of learner updates per second.
Congrats 🎉, you have finished this tutorial!
If you have any questions or need help, please reach out on Discord.
Paper citation:
@article{luo2024precise,
title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
author={Luo, Jianlan and Xu, Charles and Wu, Jeffrey and Levine, Sergey},
journal={arXiv preprint arXiv:2410.21845},
year={2024}
}