In this section, we step from kinematics to control. We first show how to reason with velocities (differential inverse kinematics, diff-IK), then close the loop with feedback, and finally summarize where classical pipelines struggle in practice.
Instead of solving for joint positions directly, we can work with velocities. If we know the relationship between joint velocities and end-effector velocities, we can control motion more smoothly:

$$\dot{p} = J(q)\,\dot{q}$$

where $J(q)$ is the Jacobian matrix, which relates joint-space velocities to task-space velocities.
Given a desired end-effector velocity $\dot{p}^*$, find joint velocities:

$$\dot{q} = J^{+}(q)\,\dot{p}^*$$

where $J^{+}$ is the pseudo-inverse of the Jacobian.
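As a concrete sketch, here is diff-IK for a hypothetical planar 2-link arm (the link lengths, joint angles, and function names are illustrative assumptions, not from a specific library):

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    """Jacobian of a planar 2-link arm's end-effector position w.r.t. joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

def diff_ik(q, p_dot_star):
    """Map a desired task-space velocity to joint velocities via the pseudo-inverse."""
    J = jacobian_2link(q)
    return np.linalg.pinv(J) @ p_dot_star

q = np.array([0.3, 0.5])                       # current joint angles (rad)
q_dot = diff_ik(q, np.array([0.1, 0.0]))       # drive the end-effector in +x
```

The pseudo-inverse gives the minimum-norm joint velocity that achieves the requested end-effector velocity; near singular configurations (here, when the arm is fully stretched) it should be replaced with a damped least-squares solve.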
Open-loop tracking is brittle under modeling errors and disturbances. We close the loop by feeding back the tracking error.
Real environments are dynamic and uncertain: moving obstacles, modeling errors, and disturbances all demand feedback control rather than open-loop execution.
Combine desired motion with error correction:

$$\dot{p}^{\text{cmd}} = \dot{p}^* + K_p\, e$$

where $e = p^* - p$ is the position error and $K_p > 0$ is a feedback gain.
Start with a small gain $K_p$ and increase it gradually while monitoring for oscillations. Use a watchdog (safety stop) and saturate commands to keep the system within safe limits.
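A minimal sketch of one step of this loop in task space (the gain value, saturation limit, and function name are illustrative assumptions):

```python
import numpy as np

def feedback_command(p, p_star, p_dot_star, kp=2.0, v_max=0.5):
    """Feedforward velocity plus proportional error correction, saturated for safety."""
    e = p_star - p                        # position error
    v_cmd = p_dot_star + kp * e           # feedforward + feedback
    return np.clip(v_cmd, -v_max, v_max)  # command saturation

# One step: the robot lags 0.5 m behind the target in x, so the
# corrected command exceeds v_max and is clipped.
v = feedback_command(p=np.array([0.0, 0.0]),
                     p_star=np.array([0.5, 0.0]),
                     p_dot_star=np.array([0.1, 0.0]))
```

The resulting command would then be passed through diff-IK to obtain joint velocities, closing the loop at every control tick.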
With differential reasoning and feedback, many tracking tasks are solvable—on paper. In practice, the system still breaks under real-world complexity for the reasons below.
Four key limitations of dynamics-based robotics approaches:

1. Modular pipelines: classical systems are built from separate modules, and errors in one stage propagate and compound downstream.
2. Unstructured environments: traditional methods struggle when the world is dynamic, cluttered, or only partially observable.
3. Complex physics: real-world contact, friction, and deformation are difficult to model accurately.
4. No learning from data: classical methods don't leverage experience to improve over time.
To address these limitations, we contrast a classical modular pipeline with an end-to-end learning policy.
Classical Robotics Approach:
Perception → State Estimation → Planning → Control → Actuation

Challenges: each stage relies on hand-built models, and errors compound as they propagate down the chain.
This is the promise of robot learning!
The best of both worlds: modern robot learning often combines classical insights with learned components. For example, a learned policy can be wrapped in safety constraints from control theory. Pure learning versus pure classical is a false dichotomy; hybrid approaches have had notable successes.
Up next, we’ll show how learning-based methods (reinforcement learning and imitation learning) absorb some of this complexity by optimizing directly from data.
For a full list of references, check out the tutorial.
Feedback Systems: An Introduction for Scientists and Engineers (2008)
Karl Johan Åström and Richard M. Murray
A comprehensive introduction to feedback control systems, covering the principles that underlie closed-loop control in robotics.
Book Website
Real-Time Obstacle Avoidance for Manipulators and Mobile Robots (1986)
Oussama Khatib
A seminal paper introducing the artificial potential field method for obstacle avoidance, demonstrating how feedback can be used for reactive control in dynamic environments.
DOI:10.1177/027836498600500106