Demonstration video: 314-Karim.mp4
Our team assembled the SO-101 robotic system ourselves, covering both the hardware build and the software stack. To construct a robust and diverse training dataset, we recruited multiple operators with varying reaction times, skill levels, and control styles. In total, 50 interaction episodes were recorded, yielding a heterogeneous dataset that captures a wide range of human-robot interaction dynamics.
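As a rough sketch of how such a multi-operator dataset can be organized (the field names and structure here are illustrative assumptions, not the actual recording format used):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Frame:
    """One timestep of a teleoperated demonstration (hypothetical schema)."""
    joint_positions: list          # commanded arm joint angles, in radians
    gripper: float                 # gripper opening in [0, 1]
    camera_frame: Optional[bytes] = None  # encoded image; omitted in this sketch

@dataclass
class Episode:
    """A single recorded human-robot interaction."""
    operator_id: str
    frames: list = field(default_factory=list)

# The dataset is then simply a collection of episodes from different operators;
# here, 50 empty episodes spread across 5 hypothetical operators.
dataset = [Episode(operator_id=f"user_{i % 5}") for i in range(50)]
print(len(dataset))  # 50
```

Grouping frames per episode keeps each operator's full trajectory intact, which matters when demonstrations with different reaction times and styles are later sampled for training.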
The demonstration video accompanying this submission shows a policy trained for only 100 optimization steps, an intentionally limited regime chosen to highlight how quickly the model generalizes. The robot's task was non-trivial: locate and grasp an orange block placed nearby, carry it across the workspace to a designated blue zone, and deposit it there.
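To give a sense of scale, a 100-step imitation-learning run can be sketched as below. This is a toy behavior-cloning loop on synthetic data with a linear policy; the actual policy architecture, observation space, and training code are not described in this write-up and are assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for recorded demonstrations: each row is one frame's
# observation (e.g. joint states plus image features); actions are the
# operator's commands at that frame.
obs = rng.normal(size=(500, 8))      # 500 frames, 8-dim observations
true_w = rng.normal(size=(8, 4))
acts = obs @ true_w                  # 4-dim expert actions

# Linear policy fit by gradient descent on the mean-squared imitation loss.
w = np.zeros((8, 4))
lr = 0.01
for step in range(100):              # only 100 optimization steps
    pred = obs @ w
    grad = obs.T @ (pred - acts) / len(obs)
    w -= lr * grad

initial_loss = float(np.mean(acts ** 2))          # loss of the zero policy
final_loss = float(np.mean((obs @ w - acts) ** 2))
print(initial_loss, final_loss)
```

Even this trivial setup shows why 100 steps is a deliberately tight budget: the loss drops substantially but does not converge, which mirrors the "rapid generalization under limited training" point the demo is making.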
Even though the system had no tactile or visual feedback confirming a successful grasp, the robot consistently proceeded to the drop zone after its grasp attempt. This behavior demonstrates the learned policy's ability to execute goal-directed motion under partial observability, relying on implicit environmental cues. Notably, the robot localized the target object using onboard vision and aligned its gripper for the grasp, behavior that emerged purely from training, with no hard-coded routines.
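The open-loop progression described above, where the robot moves to the drop zone regardless of whether the grasp actually succeeded, can be illustrated as a simple phase sequence. The phase names below are illustrative, not the actual controller's states:

```python
from enum import Enum, auto

class Phase(Enum):
    LOCATE_BLOCK = auto()
    ALIGN_GRIPPER = auto()
    GRASP = auto()
    MOVE_TO_BLUE_ZONE = auto()
    RELEASE = auto()

def run_episode(grasp_succeeded: bool) -> list:
    """With no tactile or visual grasp confirmation, the phase sequence is
    identical whether or not the block is actually in the gripper."""
    trace = []
    for phase in Phase:
        trace.append(phase)
        # A closed-loop controller would branch here on grasp feedback;
        # this system has none, so it always proceeds to the next phase.
    return trace

# The trajectory is the same in both cases, which is exactly the observed
# "proceeds to the drop zone regardless" behavior.
assert run_episode(True) == run_episode(False)
```

Adding grasp validation would amount to inserting a feedback branch after GRASP, which is the closed-loop extension discussed next.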
Although it lacks closed-loop grasp validation, this outcome demonstrates the system's potential for effective task execution with constrained sensing and lays a solid foundation for further work on autonomous manipulation under uncertainty.