Tactile Gym 2.0: Sim-to-Real Deep Reinforcement Learning for Comparing Low-Cost High-Resolution Robot Touch


Yijiong Lin, John Lloyd, Alex Church, Nathan F. Lepora  

Department of Engineering Mathematics and Bristol Robotics Laboratory, University of Bristol, Bristol, U.K.


Email: {yijiong.lin, jl15313, ac14293, n.lepora}@bristol.ac.uk


arXiv 🔗 • Code (Legacy) 🔗 • Code (New) 🔗

Abstract

High-resolution optical tactile sensors are increasingly used in robotic learning environments due to their ability to capture large amounts of data directly relating to agent-environment interaction. We extend the Tactile Gym simulator to include three new optical tactile sensors (TacTip, DIGIT and DigiTac) spanning the two most popular types: GelSight-style (image-shading based) and TacTip-style (marker-based). We demonstrate that a single sim-to-real approach can be used with these three different sensors to achieve strong real-world performance despite the significant differences between real tactile images. Additionally, we lower the barrier of entry to the proposed tasks by adapting them to an inexpensive 4-DoF robot arm, further enabling the dissemination of this benchmark.


A desktop robot performing a surface-following task learned in simulation.

DIGIT

DigiTac

TacTip

DigiTac: TacTip-style sensor

DIGIT: GelSight-style sensor

Real-to-Sim Tactile Image Transfer


A single sim-to-real method can be used for two widely used yet significantly different tactile sensor types: GelSight-style (image-shading) and TacTip-style (marker-based).


Real tactile images (left) are preprocessed and translated by a U-Net generator trained with pix2pix into simulated tactile images (center). The overlay (right) superimposes the two images to highlight their similarity.
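The preprocessing step applied to real tactile images before translation can be sketched roughly as below. This is an illustrative stand-in only: the function name, sizes, and steps are assumptions, and the pix2pix-trained U-Net itself is not reproduced here (see the code repositories for the actual pipeline).

```python
import numpy as np

def preprocess_tactile_image(img, out_size=(128, 128), threshold=None):
    """Centre-crop to a square, downsample, rescale to [0, 1], and
    optionally binarise -- a rough sketch of typical preprocessing
    before a real-to-sim image-translation network."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    img = img[top:top + s, left:left + s]                   # centre crop
    step = max(1, s // out_size[0])
    img = img[::step, ::step][:out_size[0], :out_size[1]]   # naive downsample
    img = img.astype(np.float32) / 255.0                    # scale to [0, 1]
    if threshold is not None:                               # binarisation
        img = (img > threshold).astype(np.float32)
    return img

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
obs = preprocess_tactile_image(frame, threshold=0.5)        # (128, 128) float32
```

In the actual method, the output of this stage is fed to the trained generator, which maps it into the simulated tactile image domain.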



Tactile RL Environments (Tactile Gym 2.0)

We have extended Tactile Gym with three tactile sensors (DigiTac, DIGIT, TacTip) covering two widely used yet fundamentally different types: TacTip-style and GelSight-style. To make the benchmark more accessible to the research community, we have also integrated a low-cost industrial desktop robot (MG400) into the three environments shown below (with tactile images shown on the right).

Edge-following (DIGIT)

Object-pushing (DigiTac)

Surface-following (TacTip)
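To convey the shape of these tasks, here is a minimal gym-style skeleton of an edge-following environment. Every name and number here is illustrative: the real environments live in the Tactile Gym repository and use PyBullet with simulated tactile rendering, not this toy dynamics.

```python
import random

class EdgeFollowEnvSketch:
    """Toy gym-style sketch of edge-following: keep the sensor centred
    on an edge by issuing lateral corrections. Illustrative only."""

    def __init__(self, edge_y=0.0, max_steps=50):
        self.edge_y, self.max_steps = edge_y, max_steps
        self.reset()

    def reset(self):
        self.y = random.uniform(-0.5, 0.5)   # sensor offset from the edge
        self.t = 0
        return self._obs()

    def _obs(self):
        # Stand-in for a tactile image: signed distance to the edge.
        return [self.y - self.edge_y]

    def step(self, action):
        # action: lateral correction in [-1, 1], scaled to 0.1 per step
        self.y += 0.1 * max(-1.0, min(1.0, action))
        self.t += 1
        reward = -abs(self.y - self.edge_y)  # stay on the edge
        done = self.t >= self.max_steps
        return self._obs(), reward, done, {}

env = EdgeFollowEnvSketch()
obs = env.reset()
for _ in range(50):
    action = -obs[0] * 10.0                  # naive proportional policy
    obs, reward, done, info = env.step(action)
    if done:
        break
```

In the benchmark itself, the observation is a simulated tactile image and the policy is learned with deep RL rather than hand-coded.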

"Sim" Gallery

DigiTac, DIGIT, and TacTip interact with the orange, purple, and blue objects, respectively. All sensors can be mounted on either a UR5 (6-DoF) or an MG400 (4-DoF) arm.

Real-world Surface-following Task

DigiTac

DIGIT

Real-world Object-pushing Task

DigiTac

DIGIT

Real-world Edge-following Task

DigiTac

DIGIT

More Sim-to-Real Demonstrations

Sim-to-Real Experiments

Note that all control policies are trained only in simulation and then transferred to the real world without any further fine-tuning. Quantitative experimental results are reported in our paper.
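The zero-shot deployment loop can be summarised in a few lines: real tactile observations are translated into the simulated image domain before the frozen policy acts on them. All names below are illustrative stand-ins (the paper's `real_to_sim` is the pix2pix-trained U-Net, and the robot interface is the actual arm driver).

```python
def deploy_policy(policy, real_to_sim, robot, episode_len=100):
    """Sketch of zero-shot sim-to-real deployment: the policy trained in
    simulation only ever sees observations translated into the simulated
    tactile domain, and is never fine-tuned on real data."""
    obs = robot.reset()
    for _ in range(episode_len):
        sim_obs = real_to_sim(obs)    # real tactile image -> sim domain
        action = policy(sim_obs)      # frozen policy, no fine-tuning
        obs, done = robot.step(action)
        if done:
            break
    return obs

class DummyRobot:
    """Stand-in for a real robot interface (illustrative only)."""
    def __init__(self):
        self.state = 1.0
    def reset(self):
        self.state = 1.0
        return self.state
    def step(self, action):
        self.state += action
        return self.state, abs(self.state) < 1e-3

result = deploy_policy(policy=lambda o: -0.5 * o,
                       real_to_sim=lambda o: o,   # identity stand-in
                       robot=DummyRobot())
```

The key design choice is that domain adaptation happens at the observation level (real-to-sim image translation), so the same trained policy transfers across all three sensors.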

See also: Bi-Touch

We introduce Bi-Touch, a dual-arm tactile robotic system based on the Tactile Gym 2.0 setup that integrates two affordable industrial-level robot arms with low-cost high-resolution tactile sensors. A suite of bimanual manipulation tasks tailored to tactile feedback is developed and open-sourced.

See also: DigiTac

TacTip sensing surface integrated with DIGIT base: A) 3D-printed TacTip, B) acrylic window, C) lighting PCB, D) plastic housing, E) camera PCB, F) back housing.

BibTeX


@article{lin2023bitouch,
  author={Lin, Yijiong and Church, Alex and Yang, Max and Li, Haoran and Lloyd, John and Zhang, Dandan and Lepora, Nathan F.},
  journal={IEEE Robotics and Automation Letters},
  title={Bi-Touch: Bimanual Tactile Manipulation With Sim-to-Real Deep Reinforcement Learning},
  year={2023},
  volume={8},
  number={9},
  pages={5472-5479},
  doi={10.1109/LRA.2023.3295991},
  url={https://ieeexplore.ieee.org/abstract/document/10184426},
}



@article{lin2022tactilegym2,
  author={Lin, Yijiong and Lloyd, John and Church, Alex and Lepora, Nathan F.},
  journal={IEEE Robotics and Automation Letters},
  title={Tactile Gym 2.0: Sim-to-Real Deep Reinforcement Learning for Comparing Low-Cost High-Resolution Robot Touch},
  year={2022},
  volume={7},
  number={4},
  pages={10754-10761},
  doi={10.1109/LRA.2022.3195195},
  url={https://ieeexplore.ieee.org/abstract/document/9847020},
}

@inproceedings{church2021optical,
  title={Tactile Sim-to-Real Policy Transfer via Real-to-Sim Image Translation},
  author={Church, Alex and Lloyd, John and Hadsell, Raia and Lepora, Nathan F.},
  booktitle={Proceedings of the 5th Conference on Robot Learning},
  year={2022},
  editor={Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume={164},
  series={Proceedings of Machine Learning Research},
  month={08--11 Nov},
  publisher={PMLR},
  pdf={https://proceedings.mlr.press/v164/church22a/church22a.pdf},
  url={https://proceedings.mlr.press/v164/church22a.html},
}