Visual-Tactile Sensing for In-Hand Object Reconstruction

Wenqiang Xu*, Zhenjun Yu*, Han Xue, Ruolin Ye, Siqiong Yao, Cewu Lu 

(* = Equal contribution) 

[Code]   [Dataset]

Our visual-tactile learning framework VTacO and its extension VTacOH can reconstruct both rigid and non-rigid in-hand objects, and support incremental mesh refinement.

Abstract

Tactile sensing is one of the modalities humans rely on heavily to perceive the world. Combined with vision, this modality can refine local geometry, measure deformation at the contact area, and indicate the hand-object contact state.

We present VTacO for in-hand object reconstruction and VTacOH for joint hand-object reconstruction. To generate simulated training data, we propose VT-Sim, a simulation environment for hand-object interaction. Our models prove effective in both simulation and real-world experiments.

Overview

We consider the problem of 3D reconstruction of in-hand objects from a visual-tactile perspective. We propose VTacO, a novel deep learning-based visual-tactile framework that combines a global feature from the point cloud with local features from tactile images to reconstruct objects represented by a winding number field (WNF). We also extend the model to VTacOH for hand-object reconstruction by additionally modeling and reconstructing the MANO hand model.
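To make the fusion concrete, here is a minimal sketch (not the released code) of how a global point-cloud feature and per-query local tactile features could be concatenated with query coordinates and decoded into a scalar WNF value. The module name, feature dimensions, and architecture are illustrative assumptions.

```python
# Hedged sketch of a WNF decoder: fuse a global point-cloud feature
# with local tactile features and decode a winding number value per
# query point. Sizes and layer counts are illustrative assumptions.
import torch
import torch.nn as nn

class WNFDecoder(nn.Module):
    def __init__(self, global_dim=256, local_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + global_dim + local_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar WNF value per query point
        )

    def forward(self, query_xyz, global_feat, local_feat):
        # query_xyz: (B, N, 3) query points
        # global_feat: (B, global_dim), broadcast to every query
        # local_feat: (B, N, local_dim), gathered from tactile image
        # features near each query point
        g = global_feat.unsqueeze(1).expand(-1, query_xyz.shape[1], -1)
        return self.mlp(torch.cat([query_xyz, g, local_feat], dim=-1))

dec = WNFDecoder()
wnf = dec(torch.randn(2, 100, 3), torch.randn(2, 256), torch.randn(2, 100, 64))
```

The mesh is then extracted from the predicted WNF, whose values indicate interior versus exterior regions of the object.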

Dataset

We use two object benchmarks.

To generate a simulated dataset, we present VT-Sim, a Unity-based simulation environment that efficiently produces hand-object interaction training samples with ground-truth WNF, visual depth images, tactile signals, and sensor poses.
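The WNF ground truth can be computed analytically from a triangle mesh: the generalized winding number of a query point is the sum of the signed solid angles subtended by all triangles, divided by 4π, giving roughly 1 inside a watertight mesh and 0 outside. A minimal NumPy sketch using the standard van Oosterom–Strackee solid-angle formula (not the project's actual ground-truth generator):

```python
# Hedged sketch: generalized winding number of a query point with
# respect to a triangle mesh, via signed solid angles
# (van Oosterom-Strackee formula). ~1 inside a watertight,
# outward-oriented mesh, ~0 outside.
import numpy as np

def winding_number(query, vertices, faces):
    # Translate each triangle so the query point sits at the origin.
    a = vertices[faces[:, 0]] - query
    b = vertices[faces[:, 1]] - query
    c = vertices[faces[:, 2]] - query
    la = np.linalg.norm(a, axis=1)
    lb = np.linalg.norm(b, axis=1)
    lc = np.linalg.norm(c, axis=1)
    # Signed solid angle of each triangle: tan(omega/2) = det / denom.
    det = np.einsum('ij,ij->i', a, np.cross(b, c))
    denom = (la * lb * lc
             + np.einsum('ij,ij->i', a, b) * lc
             + np.einsum('ij,ij->i', b, c) * la
             + np.einsum('ij,ij->i', c, a) * lb)
    return np.sum(2.0 * np.arctan2(det, denom)) / (4.0 * np.pi)

# Tetrahedron with outward-oriented faces: centroid is inside.
V = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
F = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
inside = winding_number(np.array([0.25, 0.25, 0.25]), V, F)
outside = winding_number(np.array([2.0, 2.0, 2.0]), V, F)
```

Unlike occupancy, the winding number degrades gracefully for open or imperfect meshes, which makes it a convenient supervision target.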

We use GraspIt! for hand-pose acquisition. In Unity, rigid objects are modeled with a RigidBody and mesh collider, while deformable objects are modeled with Obi, an XPBD-based physics engine. When we move the MANO hand with mounted sensors to the retargeted pose, the sensors collide with the object and form the grasp.

Results

Code Release

Code and instructions for training VTacO and VTacOH are available at VTacO.

The VTacO datasets are available at VTacO_Dataset.