LIBERO is a benchmark designed to study lifelong robot learning. The idea is that robots won’t just be pretrained once in a factory; they’ll need to keep learning and adapting with their human users over time. This ongoing adaptation is called lifelong learning in decision making (LLDM), and it’s a key step toward building robots that become truly personalized helpers.
To make progress on this challenge, LIBERO provides a set of standardized tasks that focus on knowledge transfer: how well a robot can apply what it has already learned to new situations. By evaluating on LIBERO, different algorithms can be compared fairly and researchers can build on each other’s work.
LIBERO includes five task suites:
- `libero_spatial` – tasks that require reasoning about spatial relations.
- `libero_object` – tasks centered on manipulating different objects.
- `libero_goal` – goal-conditioned tasks where the robot must adapt to changing targets.
- `libero_90` – 90 short-horizon tasks from the LIBERO-100 collection.
- `libero_10` – 10 long-horizon tasks from the LIBERO-100 collection.

Together, these suites cover 130 tasks, ranging from simple object manipulations to complex multi-step scenarios. LIBERO is meant to grow over time and to serve as a shared benchmark where the community can test and improve lifelong learning algorithms.

At LeRobot, we ported LIBERO into our framework and used it mainly to evaluate SmolVLA, our lightweight Vision-Language-Action model.
LIBERO is now part of our multi-eval simulation support, meaning you can benchmark your policies on a single task suite or across multiple suites at once with just a flag.
To install LIBERO, after following the official LeRobot installation instructions, run:
pip install -e ".[libero]"
Evaluate a policy on one LIBERO suite:
lerobot-eval \
--policy.path="your-policy-id" \
--env.type=libero \
--env.task=libero_object \
--eval.batch_size=2 \
--eval.n_episodes=3

- `--env.task` picks the suite (`libero_object`, `libero_spatial`, etc.).
- `--eval.batch_size` controls how many environments run in parallel.
- `--eval.n_episodes` sets how many episodes to run in total.

Benchmark a policy across multiple suites at once:
lerobot-eval \
--policy.path="your-policy-id" \
--env.type=libero \
--env.task=libero_object,libero_spatial \
--eval.batch_size=1 \
--eval.n_episodes=2

Pass a comma-separated list of suites to `--env.task` for multi-suite evaluation.

When using LIBERO through LeRobot, policies interact with the environment via observations and actions:
Observations
- `observation.state` – proprioceptive features (agent state).
- `observation.images.image` – main camera view (`agentview_image`).
- `observation.images.image2` – wrist camera view (`robot0_eye_in_hand_image`).

⚠️ Note: LeRobot enforces the `.images.*` prefix for any multi-modal visual features. Always ensure that your policy config `input_features` use the same naming keys, and that your dataset metadata keys follow this convention during evaluation.
If your data contains different keys, you must rename the observations to match what the policy expects, since the feature names are baked into the policy’s normalization statistics.
This will be fixed with the upcoming Pipeline PR.
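If you do need to remap keys at evaluation time, here is a minimal sketch of the idea (the source key names on the left are placeholders, not guaranteed names from your data):

```python
# Hypothetical sketch: remap observation keys to the LeRobot naming convention
# expected by the policy. The source names on the left are placeholders.
KEY_MAP = {
    "agentview_image": "observation.images.image",
    "robot0_eye_in_hand_image": "observation.images.image2",
    "robot_state": "observation.state",  # placeholder proprioceptive key
}

def remap_observation(obs: dict) -> dict:
    """Return a copy of `obs` with keys renamed to match the policy's input_features."""
    return {KEY_MAP.get(key, key): value for key, value in obs.items()}
```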
Actions
The action space is `Box(-1, 1, shape=(7,))`.

We also provide a notebook for quick testing.
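As a minimal illustration of that interface (a sketch, not LeRobot code), the snippet below builds the same space with Gymnasium and samples a random action of the kind a policy must output at every step:

```python
# Minimal sketch of the LIBERO action interface described above:
# a 7-dimensional continuous action with every component in [-1, 1].
import numpy as np
from gymnasium.spaces import Box

action_space = Box(low=-1.0, high=1.0, shape=(7,), dtype=np.float32)

action = action_space.sample()        # random 7-dim action in [-1, 1]
assert action_space.contains(action)  # any policy output must satisfy this
```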
Training with LIBERO

When training on LIBERO tasks, make sure your dataset parquet and metadata keys follow the LeRobot convention.
The environment expects:
- `observation.state` → 8-dim agent state
- `observation.images.image` → main camera (`agentview_image`)
- `observation.images.image2` → wrist camera (`robot0_eye_in_hand_image`)

⚠️ Cleaning the dataset upfront is simpler and more efficient than remapping keys inside the code. To avoid potential mismatches and key errors, we provide a preprocessed LIBERO dataset that is fully compatible with the current LeRobot codebase and requires no additional manipulation: 👉 HuggingFaceVLA/libero
For reference, here is the original dataset published by Physical Intelligence: 👉 physical-intelligence/libero
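Before launching a run, it can help to sanity-check that your dataset exposes exactly these feature names. A minimal sketch, assuming you can extract a list of feature names from your dataset’s metadata (the helper below is illustrative, not a LeRobot API):

```python
# Minimal sketch: check that a dataset exposes the feature names the
# LIBERO env in LeRobot expects, before starting a training run.
EXPECTED_FEATURES = {
    "observation.state",
    "observation.images.image",
    "observation.images.image2",
}

def check_features(feature_names) -> None:
    """Raise if any expected observation key is missing from `feature_names`."""
    missing = EXPECTED_FEATURES - set(feature_names)
    if missing:
        raise ValueError(f"Dataset is missing expected keys: {sorted(missing)}")

# Example usage with a hypothetical list of keys read from metadata:
check_features(["observation.state", "observation.images.image", "observation.images.image2"])
```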
lerobot-train \
--policy.type=smolvla \
--policy.repo_id=${HF_USER}/libero-test \
--policy.load_vlm_weights=true \
--dataset.repo_id=HuggingFaceVLA/libero \
--env.type=libero \
--env.task=libero_10 \
--output_dir=./outputs/ \
--steps=100000 \
--batch_size=4 \
--eval.batch_size=1 \
--eval.n_episodes=1 \
--eval_freq=1000

LeRobot uses MuJoCo for simulation. You need to set the rendering backend before training or evaluation:

export MUJOCO_GL=egl → for headless servers (e.g. HPC, cloud)
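If you prefer to set the backend from Python (for example at the top of a notebook), a minimal sketch is shown below; note that the variable must be set before MuJoCo is first imported:

```python
# Minimal sketch: select the MuJoCo rendering backend from Python.
# This must run before MuJoCo (or anything that imports it) is loaded.
import os

os.environ.setdefault("MUJOCO_GL", "egl")  # EGL for headless rendering
```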
Reproducing π₀.₅ on LIBERO

We reproduce the results of π₀.₅ on the LIBERO benchmark using the LeRobot implementation. We take the Physical Intelligence LIBERO base model (pi05_libero) and finetune it for an additional 6k steps in bfloat16, with a batch size of 256 on 8 H100 GPUs, using the HuggingFace LIBERO dataset.
The finetuned model can be found here:
We then evaluate the finetuned model using the LeRobot LIBERO implementation, by running the following command:
python src/lerobot/scripts/eval.py \
--env.type=libero \
--env.task=libero_spatial,libero_object,libero_goal,libero_10 \
--eval.batch_size=1 \
--eval.n_episodes=10 \
--policy.path=pi05_libero_finetuned \
--policy.n_action_steps=10 \
--output_dir=./eval_logs/ \
--env.max_parallel_tasks=1
Note: We set n_action_steps=10, similar to the original OpenPI implementation.
We obtain the following results on the LIBERO benchmark:
| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
|---|---|---|---|---|---|
| π₀.₅ | 97.0 | 99.0 | 98.0 | 96.0 | 97.5 |
These results are consistent with the original results reported by Physical Intelligence:
| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
|---|---|---|---|---|---|
| π₀.₅ | 98.8 | 98.2 | 98.0 | 92.4 | 96.85 |