In robotic grasping tasks, the manipulation of transparent objects remains a persistent challenge. Existing methods for transparent object grasping rely primarily on visual sensors, attempting to recover geometric features from raw visual data for grasp execution. However, the visual appearance of transparent objects is unreliable, whereas tactile contact can directly sense their physical properties. Moreover, these vision-based methods fail to account for variations in object standing type (OST) and weight, which hinders precise grasping of transparent objects in differing physical states. In contrast, humans naturally form memory associations between the visual and tactile information of handled transparent objects and adjust grip force through tactile feedback. Inspired by this, we propose a Tactile-Enhanced Visual Grasping strategy (TEVG), a novel method that augments robotic visual perception with tactile information about physical properties to achieve precise grasping of transparent objects with unknown OST and weight. The TEVG framework consists of two components: pre-grasp enhancement (PE) and in-hand enhancement (IE). Pre-grasp enhancement embeds tactile features into the visual encoder during the pre-grasp phase to predict physical properties in advance, thereby enabling explicit identification of OST and grasp pose prediction through the tactile-enhanced visual (TEV) encoder. In-hand enhancement enables real-time adaptive adjustment of grasp force during contact manipulation to accommodate objects of unknown weight. Experimental results on the UR5 robotic platform demonstrate that TEVG significantly improves grasping accuracy and stability for transparent objects.
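To make the pre-grasp enhancement idea concrete, the following is a minimal sketch of tactile-visual feature fusion for OST classification. All dimensions, weight matrices, and the fusion-by-concatenation scheme are illustrative assumptions, not the paper's actual TEV encoder architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def tev_fuse(visual_feat, tactile_feat, w_t, w_cls):
    """Fuse a tactile embedding into the visual feature and classify OST.

    Hypothetical sketch: project the tactile reading, concatenate it
    with the visual feature, then apply a linear classification head
    over three assumed OST categories.
    """
    tactile_emb = np.tanh(tactile_feat @ w_t)           # tactile projection
    fused = np.concatenate([visual_feat, tactile_emb])  # simple fusion
    logits = fused @ w_cls                              # 3 OST classes
    return int(np.argmax(logits))

# Illustrative random features and weights (stand-ins for learned ones).
visual = rng.standard_normal(128)    # e.g. visual encoder output
tactile = rng.standard_normal(16)    # e.g. pre-grasp tactile reading
w_t = rng.standard_normal((16, 32))
w_cls = rng.standard_normal((128 + 32, 3))
print(tev_fuse(visual, tactile, w_t, w_cls))
```

In practice the fusion and classification weights would be learned jointly with the grasp pose prediction head; concatenation here merely stands in for whatever fusion mechanism the TEV encoder uses.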
[Figure: object standing types — bottom, side, open]
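The in-hand enhancement component can be illustrated by a simple closed-loop grip-force rule: start from a light grip and increase force while the tactile signal indicates slip, up to a safety cap. The function name, force parameters, and slip model below are all hypothetical; the paper's actual controller may differ substantially.

```python
def adapt_grip_force(slip_detected, f_init=1.0, f_step=0.5, f_max=10.0):
    """Increase grip force until slip stops or the cap is reached.

    slip_detected: callable taking the current force (N) and returning
    True if the tactile signal indicates slip (hypothetical interface).
    """
    force = f_init
    while slip_detected(force) and force < f_max:
        force = min(force + f_step, f_max)  # small step keeps grip gentle
    return force

# Toy tactile model: an object of unknown weight stops slipping once the
# grip force exceeds 3.2 N (illustrative threshold, not measured data).
slips_below = lambda f: f < 3.2
print(adapt_grip_force(slips_below))  # -> 3.5
```

Capping the force protects fragile transparent objects such as glassware, while the incremental increase lets the gripper settle on a force just sufficient for the object's unknown weight.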