GitHub: https://github.com/Ho-lab-jaist/contac
Abstract
Robotic systems with continuum bodies offer a high degree of dexterity, which provides advantages in accuracy and safety when operating in cluttered environments. However, current methods for describing posture or detecting contact on such continuum structures rely on bespoke designs or are limited to a single sensing modality, which hinders their scalability and generalization. This study proposes a novel vision-based tactile sensing system, named ConTac, that provides both proprioception and tactile detection for a continuum-emulated soft skin. To realize these functions, we employ two corresponding deep-learning models trained on simulation data. The models are applied zero-shot to real-world data without fine-tuning. Experimental results show that the system predicts the posture of a skin-equipped redundant robot arm with a mean tip-position error of 8.83 mm, while the mean touch-localization error is 28.86 mm. We then compared model performance on two different robot modules, demonstrating the generalizability of the system. Finally, an admittance control strategy is developed using the shape and contact information, allowing the robot arm to react to collisions. The proposed method shows potential for adaptation to hyper-redundant or continuum robots, enhancing their perception capabilities and control paradigms.
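The abstract describes a two-model perception pipeline: one network estimates skin shape (proprioception) and another localizes touch, both trained in simulation and run zero-shot on real camera frames. The sketch below illustrates that structure only; the model classes, checkpoint names, and output formats are hypothetical assumptions, not the released ConTac API (see the GitHub repository for the actual implementation).

```python
import numpy as np
import torch

# Hypothetical inference sketch of the two-model pipeline described above.
# Checkpoint names and output semantics are illustrative assumptions.
shape_net = torch.jit.load("shape_model.pt")    # proprioception: image -> skin shape
touch_net = torch.jit.load("contact_model.pt")  # tactile: image -> contact location

def perceive(image: np.ndarray):
    """Run both sim-trained models zero-shot on one real camera frame (H x W x 3, uint8)."""
    x = torch.from_numpy(image).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        shape = shape_net(x)    # e.g. nodal displacements of the continuum skin
        contact = touch_net(x)  # e.g. touch location on the skin surface
    return shape, contact
```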
Concept of the ConTac System for Sensing and Control
Simulation Environment for Data Collection
Data collection using the simulation environment
Demonstration of ConTac-aware Control
A human hand touches Unit 1 or Unit 2 and obstructs the arm's movement. The robot arm alters its motion in response and still completes the task.
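The reaction in this demonstration relies on the admittance control strategy mentioned in the abstract: an estimated contact force drives virtual mass-damper-stiffness dynamics whose output offsets the commanded trajectory. The following is a minimal sketch of generic admittance control under assumed gains and a 3-D Cartesian offset; it is not the paper's exact controller or force-estimation method.

```python
import numpy as np

class AdmittanceController:
    """Minimal admittance law: M*x_ddot + D*x_dot + K*x = f_ext (gains are assumed)."""

    def __init__(self, m=2.0, d=30.0, k=100.0, dt=0.01):
        self.m, self.d, self.k, self.dt = m, d, k, dt  # virtual mass, damping, stiffness
        self.x = np.zeros(3)      # Cartesian offset from the nominal pose
        self.x_dot = np.zeros(3)  # offset velocity

    def update(self, f_ext):
        """Integrate the virtual dynamics one step for contact force f_ext (N)."""
        x_ddot = (f_ext - self.d * self.x_dot - self.k * self.x) / self.m
        self.x_dot += x_ddot * self.dt
        self.x += self.x_dot * self.dt
        return self.x  # added to the commanded tip position

# Example: a 5 N push along +y deflects the commanded trajectory; the offset
# relaxes back toward zero once the contact force disappears.
ctrl = AdmittanceController()
for t in range(100):
    f = np.array([0.0, 5.0, 0.0]) if t < 50 else np.zeros(3)
    offset = ctrl.update(f)
```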
Shape Reconstruction of Continuum Skin
Digital twin of the continuum skin of the ConTac Unit
Digital twin of the continuum skin of the ConTac Arm
Shape estimation for a soft backbone. The ConTac sensing system is applied to a soft backbone without any calibration.