π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
π₀ represents a breakthrough in robotics as the first general-purpose robot foundation model developed by Physical Intelligence. Unlike traditional robot programs that are narrow specialists programmed for repetitive motions, π₀ is designed to be a generalist policy that can understand visual inputs, interpret natural language instructions, and control a variety of different robots across diverse tasks.
As described by Physical Intelligence, while AI has achieved remarkable success in digital domains, from chess-playing to drug discovery, human intelligence still dramatically outpaces AI in the physical world. To paraphrase Moravec’s paradox, winning a game of chess represents an “easy” problem for AI, but folding a shirt or cleaning up a table requires solving some of the most difficult engineering problems ever conceived. π₀ represents a first step toward developing artificial physical intelligence that enables users to simply ask robots to perform any task they want, just like they can with large language models.
π₀ combines several key innovations, including a pretrained vision-language backbone, a flow matching action expert for continuous control, and training across many different robot embodiments.
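One of those innovations is flow matching for action generation: instead of predicting discrete action tokens, π₀ produces continuous action chunks by integrating a learned velocity field that transports Gaussian noise toward the action distribution. The toy sketch below illustrates that integration, replacing the learned network with the analytic conditional optimal-transport field toward a fixed target vector (the target, dimensionality, and step count are illustrative placeholders, not π₀'s actual parameters):

```python
import numpy as np

def velocity(a, t, target):
    # Conditional optimal-transport vector field: points from the current
    # sample toward the target, scaled by the remaining integration time.
    # In pi0 this role is played by a learned network conditioned on
    # images, state, and the language instruction.
    return (target - a) / (1.0 - t)

def sample_actions(target, steps=10, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(target.shape)  # start from Gaussian noise
    for i in range(steps):
        t = i / steps
        a = a + (1.0 / steps) * velocity(a, t, target)  # Euler step
    return a

# Stand-in for a 3-dimensional action chunk.
print(sample_actions(np.array([0.1, -0.3, 0.5])))
```

With the exact OT field, Euler integration lands on the target regardless of the noise sample; with a learned field, the same loop produces smooth trajectories drawn from the training distribution.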
Install LeRobot by following our Installation Guide.
Install Pi0 dependencies by running:
```bash
pip install -e ".[pi]"
```

π₀ is trained on the largest robot interaction dataset to date, combining three key data sources.
To use π₀ in LeRobot, specify the policy type as:
```bash
policy.type=pi0
```

For training π₀, you can use the standard LeRobot training script with the appropriate configuration:
```bash
python src/lerobot/scripts/lerobot_train.py \
  --dataset.repo_id=your_dataset \
  --policy.type=pi0 \
  --output_dir=./outputs/pi0_training \
  --job_name=pi0_training \
  --policy.pretrained_path=lerobot/pi0_base \
  --policy.repo_id=your_repo_id \
  --policy.compile_model=true \
  --policy.gradient_checkpointing=true \
  --policy.dtype=bfloat16 \
  --steps=3000 \
  --policy.device=cuda \
  --batch_size=32
```

Key options:

- `--policy.compile_model=true`: enables model compilation for faster training
- `--policy.gradient_checkpointing=true`: significantly reduces memory usage during training
- `--policy.dtype=bfloat16`: uses mixed-precision training for efficiency
- `--batch_size=32`: training batch size; adapt it to your GPU memory
- `--policy.pretrained_path=lerobot/pi0_base`: the base π₀ model you want to finetune
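The `--policy.dtype=bfloat16` option roughly halves activation and gradient memory relative to float32 at the cost of mantissa precision. The stand-alone snippet below illustrates that precision loss by emulating bfloat16 via truncation of the low 16 bits of a float32 encoding (real conversions typically round to nearest even rather than truncate):

```python
import struct

def to_bfloat16(x: float) -> float:
    # bfloat16 keeps float32's sign bit and all 8 exponent bits, but only
    # 7 of its 23 mantissa bits. Zeroing the low 16 bits of the float32
    # encoding is a simple round-toward-zero emulation of that format.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(to_bfloat16(3.14159))  # 3.140625 — only ~2-3 decimal digits survive
```

The dynamic range matches float32 (same exponent width), which is why bfloat16 training usually needs no loss scaling, unlike float16.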
This model follows the Apache 2.0 License, consistent with the original OpenPI repository.