Recent advances in robotics and Physical AI are driving the development of intelligent machines capable of mimicking or augmenting human abilities in the physical world. Among these abilities, object manipulation with human-like hands is one of the most fundamental, yet most challenging, to replicate in robotic systems.
Human hands are extraordinary biological systems that integrate complex sensing and actuation mechanisms. Through the seamless fusion of vision, tactile sensing, and proprioceptive feedback, humans can manipulate a wide variety of objects with remarkable dexterity. Even in contact-rich tasks such as tying knots, assembling parts, routing cables, or handling deformable materials, humans achieve stable, precise, and adaptive manipulation under uncertainty.
Replicating this level of dexterity in robotic systems remains a major challenge. Unlike humans, most robotic manipulators are designed to operate in structured environments and rely heavily on predefined instructions. As a result, they struggle with tasks involving uncertainty, deformable objects, or complex contact interactions.
Research at SurGLab aims to develop robotic technologies that enable human-level dexterity, precision, and adaptability in manipulation. By integrating perception, control, and learning, we seek to build robotic systems capable of robust object manipulation in real-world environments.
Beyond replicating human capability, we are also interested in enabling robots to perform ultra-precision manipulation tasks that exceed human physical limits. For example, robotic systems can operate at scales of tens to hundreds of micrometers, performing manipulation tasks beyond the reach of human perception, motor control, and tactile sensing.
Through these efforts, our goal is to develop robotic manipulation systems that are both human-like in adaptability and superhuman in precision, opening new possibilities for applications ranging from advanced manufacturing and surgical robotics to micro-scale assembly.
Perception for manipulation: real-time visual servoing, robust object and robot pose estimation under occlusion, 3D reconstruction of deformable objects.
Contact-rich manipulation: contact-aware manipulation, rope and cable manipulation, automated knot-tying, stable manipulation under uncertain contact conditions.
Grasping and coordination: optimized grasp planning, bimanual manipulation and coordination, object handover strategies, manipulation in confined environments.
Ultra-precision manipulation: precision assembly, micro-scale manipulation (tens to hundreds of micrometers), robot and hand–eye calibration.
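As one concrete illustration of the visual-servoing topic above, classical image-based visual servoing maps image-feature errors to a camera velocity through the interaction matrix. The sketch below is a minimal NumPy example with illustrative point features and an assumed known depth; it is a textbook formulation, not an implementation from SurGLab's own systems.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity (vx, vy, vz, wx, wy, wz) driving features toward their desired positions:
    v = -gain * pinv(L) @ e, where L stacks per-point interaction matrices and e is the feature error."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: four observed points slightly offset from a square target pattern
features = [(0.12, 0.11), (-0.09, 0.10), (-0.10, -0.12), (0.11, -0.09)]
desired  = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
depths   = [1.0, 1.0, 1.0, 1.0]  # assumed known depths for this sketch
v = ibvs_velocity(features, desired, depths)
```

Applying the resulting velocity in closed loop drives the feature error exponentially toward zero under the standard local-stability conditions; in practice the depth is estimated rather than known, and the gain is tuned to the control rate.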