Bi-Touch: Bimanual Tactile Manipulation with Sim-to-Real Deep Reinforcement Learning

Yijiong Lin, Alex Church, Max Yang, Haoran Li, John Lloyd, Dandan Zhang, Nathan F. Lepora

Department of Engineering Mathematics and Bristol Robotics Laboratory, University of Bristol, Bristol, U.K.


Email: yijiong.lin@bristol.ac.uk


arXiv 🔗    •    Code (New) 🔗

Introduction

Bimanual manipulation with tactile feedback will be key to human-level robot dexterity. Here we introduce a dual-arm tactile robotic system (Bi-Touch) based on the Tactile Gym 2.0 setup that integrates two affordable industrial-level robot arms with low-cost high-resolution tactile sensors. 

We present a suite of bimanual manipulation tasks tailored towards tactile feedback: bi-pushing, bi-reorienting, and bi-gathering. To learn effective policies, we introduce appropriate reward functions for these tasks and propose a novel goal-update mechanism with deep reinforcement learning. We then transfer these policies to the real world with a tactile sim-to-real approach, and our analysis highlights and addresses several challenges met during this sim-to-real transfer. Finally, we demonstrate the generalizability and robustness of the system by evaluating it on unseen objects under applied perturbations in the real world.
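To make the goal-update idea concrete, below is a minimal Python sketch of one way such a mechanism and task reward could look. It assumes the goal is a pose vector that advances towards a final target once the object reaches the current sub-goal within a tolerance; the names and values (goal_tolerance, goal_increment, w_goal, w_contact) are illustrative assumptions, not the paper's actual code.

import numpy as np

def update_goal(obj_pose, goal_pose, final_goal, goal_tolerance=0.01, goal_increment=0.02):
    """Advance the intermediate goal towards the final goal once the
    current sub-goal has been reached (illustrative sketch, not the paper's code)."""
    if np.linalg.norm(goal_pose - obj_pose) < goal_tolerance:
        direction = final_goal - goal_pose
        dist = np.linalg.norm(direction)
        if dist > 1e-8:
            goal_pose = goal_pose + min(goal_increment, dist) * direction / dist
    return goal_pose

def reward(obj_pose, goal_pose, in_contact, w_goal=1.0, w_contact=0.1):
    """Dense reward: negative distance to the current goal plus a small
    bonus for keeping both tactile sensors in contact."""
    return -w_goal * np.linalg.norm(goal_pose - obj_pose) + w_contact * float(in_contact)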



Overview of the proposed dual-arm tactile robotic system (Bi-Touch) with sim-to-real deep RL. a) Deep RL is applied to learn policies for three simulated bimanual tactile manipulation tasks (red arrows show desired displacements) using Tactile Gym. b) A real-to-sim tactile image generator is learned for the surface feature. c) At real-world evaluation, real tactile images are fed through the generator into the RL policy, concatenated with proprioceptive information.
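As a rough illustration of step c), the deployment loop could be sketched as below; the sensor and robot method names (capture, get_tcp_poses, apply_action) are placeholders rather than an existing API, and the generator and policy are assumed to be PyTorch modules exported from simulation.

import torch

def real_world_step(policy, generator, left_sensor, right_sensor, robot):
    """One real-world control step (illustrative sketch with placeholder names)."""
    # Capture raw tactile images from both arms' sensors (H x W grayscale arrays).
    real_left = left_sensor.capture()
    real_right = right_sensor.capture()

    with torch.no_grad():
        # Real-to-sim translation: map real tactile images to simulation-style images.
        sim_left = generator(torch.as_tensor(real_left).float()[None, None])
        sim_right = generator(torch.as_tensor(real_right).float()[None, None])

        # Concatenate translated tactile observations with proprioceptive information.
        proprio = torch.as_tensor(robot.get_tcp_poses()).float()[None]
        obs = {"tactile": torch.cat([sim_left, sim_right], dim=1), "proprio": proprio}

        # Query the policy learned in simulation and command both arms.
        action = policy(obs)

    robot.apply_action(action.squeeze(0).cpu().numpy())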

Bi-Touch in Tactile Gym 2.0 (Simulation)

Bi-Pushing

Bi-Reorienting

Bi-Gathering

Bi-Lifting

The Real-world Bi-Touch System (Sim2Real)


The performance of our low-cost, sim-to-real deep RL dual-arm tactile robot system was evaluated on these four bimanual tasks in the real world. We introduced appropriate reward functions for these tasks in simulation, then investigated how the learned policies transfer to the real world. The experimental results show that the developed dual-arm tactile system is effective across all tasks, including on real objects unseen during simulation training.

Bi-Lifting

Crispy Objects

Adaptability

Soft Objects

Stiff Objects

Bi-Gathering

Foam Toy and Spam Can

Apple and Can

Mug and Triangular Prism

Bi-Reorienting

Reorient Various Objects

Special Cases

Bi-Pushing

See also: Tactile Gym 2.0

High-resolution optical tactile sensors are increasingly used in robotic learning environments due to their ability to capture large amounts of data directly relating to agent-environment interaction. We extend the Tactile Gym simulator to include three new optical tactile sensors (TacTip, DIGIT, and DigiTac) of the two most popular types, GelSight-style (image-shading based) and TacTip-style (marker-based). We demonstrate that a single sim-to-real approach can be used with these three different sensors to achieve strong real-world performance despite the significant differences between their real tactile images. Additionally, we lower the barrier to entry for the proposed tasks by adapting them to an inexpensive 4-DoF robot arm, further enabling the dissemination of this benchmark.
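For illustration, the real-to-sim image translation behind this sim-to-real approach could be trained roughly as sketched below, using paired real/simulated tactile images and an L1 reconstruction loss. The published method is a pix2pix-style GAN, so this is a simplified stand-in; the network architecture, data loader, and hyperparameters here are assumptions.

import torch
import torch.nn as nn

class ConvGenerator(nn.Module):
    """Small encoder-decoder mapping real tactile images to simulated-style images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def train_real_to_sim(paired_loader, epochs=10, lr=2e-4, device="cpu"):
    """paired_loader yields (real_img, sim_img) batches of shape (B, 1, H, W)."""
    gen = ConvGenerator().to(device)
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    l1 = nn.L1Loss()
    for _ in range(epochs):
        for real_img, sim_img in paired_loader:
            real_img, sim_img = real_img.to(device), sim_img.to(device)
            loss = l1(gen(real_img), sim_img)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return gen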

A desktop robot is performing a surface-following task learned from simulation.

DIGIT

DigiTac

TacTip

High-resolution tactile sensing can provide accurate information about local contact in contact-rich robotic tasks. To improve the robustness of tactile robot control in unstructured environments, we propose and study a new concept: tactile saliency for robot touch.

BibTeX


@ARTICLE{lin2023bitouch,
  author={Lin, Yijiong and Church, Alex and Yang, Max and Li, Haoran and Lloyd, John and Zhang, Dandan and Lepora, Nathan F.},
  journal={IEEE Robotics and Automation Letters},
  title={Bi-Touch: Bimanual Tactile Manipulation With Sim-to-Real Deep Reinforcement Learning},
  year={2023},
  volume={8},
  number={9},
  pages={5472-5479},
  doi={10.1109/LRA.2023.3295991},
  url={https://ieeexplore.ieee.org/abstract/document/10184426},
}



@ARTICLE{lin2022tactilegym2,
  author={Lin, Yijiong and Lloyd, John and Church, Alex and Lepora, Nathan F.},
  journal={IEEE Robotics and Automation Letters},
  title={Tactile Gym 2.0: Sim-to-Real Deep Reinforcement Learning for Comparing Low-Cost High-Resolution Robot Touch},
  year={2022},
  volume={7},
  number={4},
  pages={10754-10761},
  doi={10.1109/LRA.2022.3195195},
  url={https://ieeexplore.ieee.org/abstract/document/9847020},
}

@InProceedings{church2021optical,
  title={Tactile Sim-to-Real Policy Transfer via Real-to-Sim Image Translation},
  author={Church, Alex and Lloyd, John and Hadsell, Raia and Lepora, Nathan F.},
  booktitle={Proceedings of the 5th Conference on Robot Learning},
  year={2022},
  editor={Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume={164},
  series={Proceedings of Machine Learning Research},
  month={08--11 Nov},
  publisher={PMLR},
  pdf={https://proceedings.mlr.press/v164/church22a/church22a.pdf},
  url={https://proceedings.mlr.press/v164/church22a.html},
}