Use your phone (iOS or Android) to control your robot.
In this guide you’ll learn how to teleoperate your robot with your phone, record and replay datasets, and evaluate a policy using phone-based end-effector control.
To use your phone to control your robot, install the relevant dependencies with:
pip install lerobot[phone]
On Android, teleoperation uses the teleop package (WebXR). When you start the Python process, it prints a local URL. Open the link on your phone, tap Start, then use the Move button to stream the phone pose. Links: teleop on PyPI.
Controls: on iOS, hold B1 to enable teleoperation and release to stop; on Android, hold the Move button and release to stop. In both cases, the first press captures a reference pose. The gripper is driven by a velocity input: on iOS, A3 controls the gripper; on Android, the A and B buttons act like increment/decrement (A opens, B closes). You can tune the gripper velocity in the GripperVelocityToJoint step.
Modify the examples to use PhoneOS.IOS or PhoneOS.ANDROID in PhoneConfig. The API is identical across platforms; only the input source differs. All examples are under examples/ and have phone_so100_*.py variants.
Teleoperation example:
from lerobot.teleoperators.phone.config_phone import PhoneConfig, PhoneOS
from lerobot.teleoperators.phone import Phone  # import path may vary with your lerobot version
teleop_config = PhoneConfig(phone_os=PhoneOS.IOS) # or PhoneOS.ANDROID
teleop_device = Phone(teleop_config)

When Phone(teleop_config) is created and connect() is called, calibration is prompted automatically. Hold the phone in the orientation described above, then:
On iOS, press B1 to capture the reference pose. On Android, press the Move button on the WebXR page to capture the reference pose.
Why calibrate? We capture the current pose so that subsequent poses are expressed in a robot-aligned frame. When you press the button again to enable control, the position is recaptured to avoid drift in case the phone was repositioned while control was disabled.
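To check that the phone inputs arrive as expected, the teleoperator can be polled directly after calibration. Below is a minimal sketch, assuming the standard teleoperator interface (connect(), get_action(), disconnect()); the exact import path and the keys returned by get_action() depend on your lerobot version.

from lerobot.teleoperators.phone.config_phone import PhoneConfig, PhoneOS
from lerobot.teleoperators.phone import Phone  # import path is an assumption

phone = Phone(PhoneConfig(phone_os=PhoneOS.ANDROID))  # or PhoneOS.IOS
phone.connect()  # triggers the calibration step described above

try:
    while True:
        # Raw phone inputs: calibrated pose plus button/analog state.
        action = phone.get_action()
        print(action)
finally:
    phone.disconnect()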
Run one of the example scripts to teleoperate, record a dataset, replay a dataset, or evaluate a policy.
All scripts assume you configured your robot (e.g., SO-100 follower) and set the correct serial port.
Additionally, you need to copy the URDF of the robot to the examples folder. For the examples in this tutorial (using SO-100/SO-101), it is highly recommended to use the URDF from the SO-ARM100 repo.
Run this example to teleoperate:
python examples/phone_to_so100/teleoperate.py
After running the example, you can drive the robot’s end effector with your phone using the controls described above.
Additionally, you can customize the mapping or safety limits by editing the processor steps shown in the examples. You can also remap inputs (e.g., use a different analog input) or adapt the pipeline to other robots (e.g., LeKiwi) by modifying the input and kinematics steps; a sketch of the step ordering is shown below. More about this in the Processors for Robots and Teleoperators guide.
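For orientation, this is roughly how the processor steps covered later in this guide fit together. The helper below is purely illustrative: it assumes each step can be applied to an action dictionary in sequence, which may not match the actual lerobot pipeline API, and the ordering simply mirrors the examples.

# Hypothetical helper, for illustration only: apply processor steps in order.
def process_phone_action(steps, raw_action):
    action = raw_action
    for step in steps:
        action = step(action)  # assumes each step is callable on an action dict
    return action

# Ordering used in the examples (constructor arguments omitted here):
#   MapPhoneActionToRobotAction -> EEReferenceAndDelta -> EEBoundsAndSafety
#   -> GripperVelocityToJoint -> InverseKinematicsEEToJoints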
Run this example to record a dataset, which saves absolute end effector observations and actions:
python examples/phone_to_so100/record.py
Run this example to replay recorded episodes:
python examples/phone_to_so100/replay.py
Run this example to evaluate a pretrained policy:
python examples/phone_to_so100/evaluate.py
Kinematics are used in multiple steps. We use Placo, a wrapper around Pinocchio, to handle our kinematics. We construct the kinematics object by passing the robot’s URDF and the target frame, setting target_frame_name to the gripper frame.
# Import path may vary with your lerobot version.
from lerobot.model.kinematics import RobotKinematics

kinematics_solver = RobotKinematics(
urdf_path="./SO101/so101_new_calib.urdf",
target_frame_name="gripper_frame_link",
joint_names=list(robot.bus.motors.keys()),
)
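A rough usage sketch of the solver follows. The method names and argument conventions (forward_kinematics, inverse_kinematics, joint ordering, units) are assumptions for illustration; check the RobotKinematics class in your lerobot version.

import numpy as np

# Joint values in the same order as the joint_names passed above.
current_joints = np.zeros(len(robot.bus.motors))

# Forward kinematics: joints -> 4x4 end-effector pose (assumed signature).
ee_pose = kinematics_solver.forward_kinematics(current_joints)

# Inverse kinematics: joints that reach a desired EE pose, starting from an
# initial guess (assumed signature; the IK steps below rely on this idea).
target_pose = ee_pose.copy()
target_pose[2, 3] += 0.02  # raise the gripper frame by 2 cm
goal_joints = kinematics_solver.inverse_kinematics(current_joints, target_pose)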
The MapPhoneActionToRobotAction step converts the calibrated phone pose and inputs into target deltas and gripper commands; its outputs are shown below.
action["enabled"] = enabled
action["target_x"] = -pos[1] if enabled else 0.0
action["target_y"] = pos[0] if enabled else 0.0
action["target_z"] = pos[2] if enabled else 0.0
action["target_wx"] = rotvec[1] if enabled else 0.0
action["target_wy"] = rotvec[0] if enabled else 0.0
action["target_wz"] = -rotvec[2] if enabled else 0.0
action["gripper_vel"] = gripper_vel # Still send gripper action when disabledThe EEReferenceAndDelta step converts target deltas to an absolute desired EE pose, storing a reference on enable, the end_effector_step_sizes are the step sizes for the EE pose and can be modified to change the motion speed.
The EEReferenceAndDelta step converts the target deltas into an absolute desired EE pose, storing a reference pose when control is enabled. The end_effector_step_sizes are the step sizes for the EE pose and can be modified to change the motion speed.
EEReferenceAndDelta(
kinematics=kinematics_solver,
end_effector_step_sizes={"x": 0.5, "y": 0.5, "z": 0.5},
motor_names=list(robot.bus.motors.keys()),
use_latched_reference=True,
),
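Conceptually, the latched-reference behavior works roughly like the simplified, position-only sketch below (an illustration, not the actual implementation): the current EE position is latched as a reference when control is enabled, and the scaled deltas are added on top of it.

class LatchedReferenceSketch:
    # Simplified, position-only illustration of EEReferenceAndDelta's idea.
    def __init__(self, step_sizes):
        self.step_sizes = step_sizes  # e.g. {"x": 0.5, "y": 0.5, "z": 0.5}
        self.reference = None

    def desired_position(self, current_ee_pos, action):
        if not action["enabled"]:
            self.reference = None  # re-captured on the next enable
            return current_ee_pos
        if self.reference is None:
            self.reference = current_ee_pos  # latch the reference on enable
        return [
            self.reference[0] + self.step_sizes["x"] * action["target_x"],
            self.reference[1] + self.step_sizes["y"] * action["target_y"],
            self.reference[2] + self.step_sizes["z"] * action["target_z"],
        ]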
The EEBoundsAndSafety step clamps EE motion to a workspace and checks for large EE step jumps to ensure safety. The end_effector_bounds define the workspace for the EE pose and can be modified to change it. The max_ee_step_m value limits how far the EE target may move in a single step and can be modified to change the safety limits.
EEBoundsAndSafety(
end_effector_bounds={"min": [-1.0, -1.0, -1.0], "max": [1.0, 1.0, 1.0]},
max_ee_step_m=0.10,
)
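A minimal sketch of this kind of safety logic, assuming simple per-axis clamping and a jump limit (the real step may differ in details):

import numpy as np

def clamp_and_limit(target, previous, bounds, max_step_m=0.10):
    target, previous = np.asarray(target, float), np.asarray(previous, float)
    # Clamp the target position to the workspace bounds.
    clamped = np.clip(target, bounds["min"], bounds["max"])
    # Limit suspiciously large jumps between consecutive targets.
    step = np.linalg.norm(clamped - previous)
    if step > max_step_m:
        clamped = previous + (clamped - previous) * (max_step_m / step)
    return clamped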
The GripperVelocityToJoint step turns a velocity-like gripper input into an absolute gripper position using the current measured state. The speed_factor is the factor by which the velocity is multiplied.
GripperVelocityToJoint(speed_factor=20.0)
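As a simplified illustration of the idea (not the actual implementation), the velocity-like input is scaled and added on top of the measured gripper position each control step:

def gripper_target(measured_gripper_pos, gripper_vel, speed_factor=20.0):
    # Scale the velocity-like input (A3 on iOS, A/B buttons on Android) and add
    # it to the current measured position to get an absolute gripper command.
    # A higher speed_factor opens/closes the gripper faster for the same input.
    return measured_gripper_pos + speed_factor * gripper_vel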
We use different IK initial guesses in the kinematics steps: either the current measured joints or the previous IK solution is used as the initial guess.
Closed loop (used in record/eval): sets initial_guess_current_joints=True so IK starts from the measured joints each frame.
InverseKinematicsEEToJoints(
kinematics=kinematics_solver,
motor_names=list(robot.bus.motors.keys()),
initial_guess_current_joints=True, # closed loop
)
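Schematically, the choice only changes which joint vector seeds the solver; here is a sketch, reusing the assumed inverse_kinematics call from the kinematics section above:

def solve_ik(kinematics_solver, desired_ee_pose, measured_joints, previous_solution,
             initial_guess_current_joints):
    # Closed loop: seed IK with the joints measured on the robot each frame.
    # Open loop (replay): seed IK with the previous solution so the commanded
    # trajectory stays smooth even without feedback from the robot.
    initial_guess = measured_joints if initial_guess_current_joints else previous_solution
    return kinematics_solver.inverse_kinematics(initial_guess, desired_ee_pose)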
Open loop (used in replay): sets initial_guess_current_joints=False so IK continues from the previous IK solution rather than the measured state. This preserves action stability when replaying without feedback.
InverseKinematicsEEToJoints(
kinematics=kinematics_solver,
motor_names=list(robot.bus.motors.keys()),
initial_guess_current_joints=False, # open loop
)

Notes and troubleshooting:
Recording saves the end-effector actions as action.ee.* features, and EE observations (observation.state.ee.*) are computed from the observed joints for logging and training on EE state.
initial_guess_current_joints=True is recommended for closed-loop control; set it to False for open-loop replay for stability.
If the WebXR page does not load, use https instead of http, use the exact IP printed by the script, and allow your browser to proceed past the certificate warning.
If the motion axes do not match your setup, edit MapPhoneActionToRobotAction or swap axes to match your setup.