LIBERO

LIBERO is a benchmark designed to study lifelong robot learning. The idea is that robots won’t just be pretrained once in a factory; they’ll need to keep learning and adapting with their human users over time. This ongoing adaptation is called lifelong learning in decision making (LLDM), and it’s a key step toward building robots that become truly personalized helpers.

To make progress on this challenge, LIBERO provides a set of standardized tasks that focus on knowledge transfer: how well a robot can apply what it has already learned to new situations. By evaluating on LIBERO, different algorithms can be compared fairly and researchers can build on each other’s work.

LIBERO includes five task suites:

- LIBERO-Spatial: 10 tasks that keep the same objects but vary their spatial layout
- LIBERO-Object: 10 tasks that keep the same layout but vary the objects being manipulated
- LIBERO-Goal: 10 tasks that keep the same objects and layout but vary the task goal
- LIBERO-90: 90 short-horizon tasks with diverse objects, layouts, and goals
- LIBERO-10 (also called LIBERO-Long): 10 long-horizon, multi-step tasks

Together, these suites cover 130 tasks, ranging from simple object manipulations to complex multi-step scenarios. LIBERO is meant to grow over time, and to serve as a shared benchmark where the community can test and improve lifelong learning algorithms.

An overview of the LIBERO benchmark

Evaluating with LIBERO

At LeRobot, we ported LIBERO into our framework and used it mainly to evaluate SmolVLA, our lightweight Vision-Language-Action model.

LIBERO is now one of the environments supported by our multi-suite simulation evaluation, meaning you can benchmark your policies on a single task suite or across several suites at once with a single flag.

To install LIBERO, after following the official LeRobot installation instructions, run:

pip install -e ".[libero]"

Single-suite evaluation

Evaluate a policy on one LIBERO suite:

lerobot-eval \
  --policy.path="your-policy-id" \
  --env.type=libero \
  --env.task=libero_object \
  --eval.batch_size=2 \
  --eval.n_episodes=3

Multi-suite evaluation

Benchmark a policy across multiple suites at once:

lerobot-eval \
  --policy.path="your-policy-id" \
  --env.type=libero \
  --env.task=libero_object,libero_spatial \
  --eval.batch_size=1 \
  --eval.n_episodes=2

Policy inputs and outputs

When using LIBERO through LeRobot, policies interact with the environment via observations and actions: observations contain camera images (a third-person view and a wrist view) along with the robot’s proprioceptive state, and actions are end-effector motion deltas plus a gripper command.
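
To make this concrete, here is a minimal sketch of what a single step looks like from the policy’s point of view. The key names, image resolution, and state dimension below are illustrative assumptions rather than guaranteed values; inspect your dataset or environment to confirm them.

# Illustrative only: key names, resolution, and state dimension are assumptions.
import numpy as np

observation = {
    "observation.images.image": np.zeros((256, 256, 3), dtype=np.uint8),        # third-person camera
    "observation.images.wrist_image": np.zeros((256, 256, 3), dtype=np.uint8),  # wrist camera (key name assumed)
    "observation.state": np.zeros((8,), dtype=np.float32),                      # proprioceptive state (dimension assumed)
}

# LIBERO uses a 7-dimensional action: 6 end-effector deltas plus 1 gripper command.
action = np.zeros((7,), dtype=np.float32)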

We also provide a notebook for quick testing: Training with LIBERO

Training with LIBERO

When training on LIBERO tasks, make sure the keys in your dataset’s parquet files and metadata follow the LeRobot convention.

The environment expects the standard LeRobot feature names, such as observation.images.*, observation.state, and action.

⚠️ Cleaning the dataset upfront is simpler and more efficient than remapping keys inside the code. To avoid potential mismatches and key errors, we provide a preprocessed LIBERO dataset that is fully compatible with the current LeRobot codebase and requires no additional manipulation: 👉 HuggingFaceVLA/libero

For reference, here is the original dataset published by Physical Intelligence: 👉 physical-intelligence/libero
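
Before launching training, it can help to sanity-check that the dataset exposes the expected feature names. Below is a minimal sketch assuming the LeRobotDataset class from the lerobot library; the exact import path may differ between lerobot versions.

# Print the feature names of a LIBERO dataset before training.
# Assumption: the import path below matches your installed lerobot version.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("HuggingFaceVLA/libero")
print(ds.features)    # expect keys like observation.images.*, observation.state, action
print(ds[0].keys())   # one sample, with the keys a policy will receive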


Example training command

lerobot-train \
  --policy.type=smolvla \
  --policy.repo_id=${HF_USER}/libero-test \
  --policy.load_vlm_weights=true \
  --dataset.repo_id=HuggingFaceVLA/libero \
  --env.type=libero \
  --env.task=libero_10 \
  --output_dir=./outputs/ \
  --steps=100000 \
  --batch_size=4 \
  --eval.batch_size=1 \
  --eval.n_episodes=1 \
  --eval_freq=1000

Note on rendering

LeRobot uses MuJoCo for simulation, so you need to set the MuJoCo rendering backend through the MUJOCO_GL environment variable before training or evaluation: for example, export MUJOCO_GL=egl for headless GPU rendering, or MUJOCO_GL=osmesa as a CPU-only fallback.
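
If you run things from a Python script or notebook rather than the shell, one option (a sketch, assuming your code runs before anything imports MuJoCo) is to set the variable programmatically:

# Set the MuJoCo rendering backend before any MuJoCo import happens.
# "egl" assumes a headless machine with a GPU; "osmesa" is a CPU-only fallback.
import os
os.environ.setdefault("MUJOCO_GL", "egl")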

Reproducing π₀.₅ results

We reproduce the results of π₀.₅ on the LIBERO benchmark using the LeRobot implementation. We take the Physical Intelligence LIBERO base model (pi05_libero) and finetune it for an additional 6k steps in bfloat16, with a batch size of 256 on 8 H100 GPUs, using the HuggingFace LIBERO dataset.

The finetuned model can be found here:

We then evaluate the finetuned model using the LeRobot LIBERO implementation by running the following command:

python src/lerobot/scripts/eval.py \
  --env.type=libero \
  --env.task=libero_spatial,libero_object,libero_goal,libero_10 \
  --eval.batch_size=1 \
  --eval.n_episodes=10 \
  --policy.path=pi05_libero_finetuned \
  --policy.n_action_steps=10 \
  --output_dir=./eval_logs/ \
  --env.max_parallel_tasks=1

Note: We set n_action_steps=10, similar to the original OpenPI implementation.

Results

We obtain the following results (success rate, %) on the LIBERO benchmark:

| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
| --- | --- | --- | --- | --- | --- |
| π₀.₅ | 97.0 | 99.0 | 98.0 | 96.0 | 97.5 |

These results are consistent with the original results reported by Physical Intelligence:

| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
| --- | --- | --- | --- | --- | --- |
| π₀.₅ | 98.8 | 98.2 | 98.0 | 92.4 | 96.85 |