π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
π₀.₅ represents a significant evolution of π₀, developed by Physical Intelligence to address a central challenge in robotics: open-world generalization. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.
As Physical Intelligence explains, the fundamental challenge is not agility or dexterity but generalization: the ability to correctly perform tasks in new settings with new objects. Consider a robot cleaning different homes: each home has different objects in different places, so generalization must occur at multiple levels: physical, visual, and semantic.
The breakthrough innovation in π₀.₅ is co-training on heterogeneous data sources. This diverse training mixture acts as a “curriculum” that enables generalization across physical, visual, and semantic levels simultaneously; a loose illustration of mixture sampling is sketched below.
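To make the co-training idea concrete, here is a standalone sketch of sampling batches from a weighted mixture of datasets. This is not LeRobot's actual data-loading code; the dataset stand-ins and mixture weights are placeholders.

```python
# Loose illustration of co-training on a weighted data mixture.
# NOT LeRobot's actual data pipeline; datasets and weights are placeholders.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset, WeightedRandomSampler

# Stand-ins for heterogeneous sources (e.g. robot episodes vs. other data).
robot_data = TensorDataset(torch.randn(1_000, 8))
web_data = TensorDataset(torch.randn(4_000, 8))

mixture = ConcatDataset([robot_data, web_data])

# One weight per sample, so that ~70% of each batch comes from robot data
# and ~30% from web data, regardless of the raw dataset sizes.
weights = torch.cat([
    torch.full((len(robot_data),), 0.7 / len(robot_data)),
    torch.full((len(web_data),), 0.3 / len(web_data)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(mixture))
loader = DataLoader(mixture, batch_size=32, sampler=sampler)
```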
Install LeRobot by following our Installation Guide.
Install Pi0.5 dependencies by running:
```bash
pip install -e ".[pi]"
```

To use π₀.₅ in your LeRobot configuration, specify the policy type as:
```bash
policy.type=pi05
```

Here's a complete training command for finetuning the base π₀.₅ model on your own dataset:
```bash
python src/lerobot/scripts/lerobot_train.py \
--dataset.repo_id=your_dataset \
--policy.type=pi05 \
--output_dir=./outputs/pi05_training \
--job_name=pi05_training \
--policy.repo_id=your_repo_id \
--policy.pretrained_path=lerobot/pi05_base \
--policy.compile_model=true \
--policy.gradient_checkpointing=true \
--wandb.enable=true \
--policy.dtype=bfloat16 \
--steps=3000 \
--policy.device=cuda \
--batch_size=32
```

- `--policy.pretrained_path=lerobot/pi05_base`: the base π₀.₅ model to finetune; other pretrained π₀.₅ checkpoints on the Hub can be substituted here
- `--policy.compile_model=true`: enables model compilation for faster training
- `--policy.gradient_checkpointing=true`: significantly reduces memory usage during training
- `--policy.dtype=bfloat16`: uses mixed-precision training for efficiency
- `--batch_size=32`: training batch size; adapt it to your GPU memory
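Before launching a long run, it can help to sanity-check that a finetuned checkpoint loads and produces actions. Below is a minimal sketch; the import path, class name, and batch keys are assumptions that may differ across LeRobot versions and datasets, so adjust them to your setup.

```python
# Sanity-check sketch: load a finetuned checkpoint and query one action.
# The import path and the observation keys below are ASSUMPTIONS, not the
# documented API; check your LeRobot version for the exact names.
import torch

from lerobot.policies.pi05.modeling_pi05 import PI05Policy  # assumed path

policy = PI05Policy.from_pretrained("./outputs/pi05_training")
policy.to("cuda")
policy.eval()

# Keys must match the features the policy was trained on (hypothetical here):
batch = {
    "observation.images.top": torch.zeros(1, 3, 224, 224, device="cuda"),
    "observation.state": torch.zeros(1, 7, device="cuda"),
    "task": ["pick up the spoon"],
}
with torch.no_grad():
    action = policy.select_action(batch)  # next action for the robot
print(action.shape)
```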
If your dataset was not converted with quantile stats, you can convert it with the following command:
```bash
python src/lerobot/datasets/v30/augment_dataset_quantile_stats.py \
  --repo-id=your_dataset
```

Or train pi05 with this normalization mapping instead:

```bash
--policy.normalization_mapping='{"ACTION": "MEAN_STD", "STATE": "MEAN_STD", "VISUAL": "IDENTITY"}'
```
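For intuition, quantile normalization rescales each action or state dimension by its 1st and 99th percentiles rather than its mean and standard deviation, which makes the statistics robust to outliers. Here is a standalone sketch of the idea, not LeRobot's exact implementation:

```python
# Standalone sketch of quantile-based normalization (not LeRobot's exact code):
# rescale each dimension so its [1%, 99%] quantile range maps to [-1, 1].
import numpy as np

rng = np.random.default_rng(0)
actions = rng.normal(size=(10_000, 7))    # stand-in for a dataset's actions

q01 = np.quantile(actions, 0.01, axis=0)  # per-dimension 1% quantile
q99 = np.quantile(actions, 0.99, axis=0)  # per-dimension 99% quantile

def normalize(a: np.ndarray) -> np.ndarray:
    """Map values in [q01, q99] to [-1, 1], robust to outliers."""
    return 2.0 * (a - q01) / (q99 - q01) - 1.0

print(normalize(actions).min(axis=0), normalize(actions).max(axis=0))
```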
π₀.₅ has demonstrated strong performance on the Libero benchmark suite. To validate the LeRobot implementation, we finetuned the π₀.₅ Libero base model for an additional 6k steps on the Libero dataset and compared the results against the OpenPI reference.
| Benchmark | LeRobot Implementation | OpenPI Reference |
|---|---|---|
| Libero Spatial | 97.0% | 98.8% |
| Libero Object | 99.0% | 98.2% |
| Libero Goal | 98.0% | 98.0% |
| Libero 10 | 96.0% | 92.4% |
| Average | 97.5% | 96.85% |
These results show that the LeRobot implementation matches the OpenPI reference across diverse robotic manipulation tasks. To reproduce them, follow the instructions in the Libero section.
This model follows the Apache 2.0 License, consistent with the original OpenPI repository.