IEEE ICRA 2022 Workshop

Reinforcement Learning for Contact-Rich Manipulation

May 27, Philadelphia (PA), USA

The workshop will be held in person with support for remote presentation and participation.

Important Details

  • Submission deadline: April 30, 2022 (AoE) (extended from April 22, 2022)

  • Paper (extended abstract) format: IEEE RAS format

  • Recommended length: maximum 4 pages excluding references

  • Submission link: OpenReview

  • Notification of acceptance: May 6, 2022 (originally April 30, 2022)

  • Workshop date: May 27, 2022

  • Location: Room 118 C


Contact-rich manipulation tasks form an important and challenging category of tasks in robotic automation, with applications ranging from object insertion to assembly. For tasks involving complex contact dynamics and friction, the relevant physical effects are difficult to model, so traditional control methods often yield inaccurate or brittle controllers. Reinforcement learning (RL) has recently been demonstrated to be a promising approach to learning robot control policies in such environments. However, RL still faces many challenges in contact-rich settings, including sample efficiency, sim2real transfer, safety and stability, and reward shaping.

The objective of the workshop is to address the challenges specific to contact-rich environments and to gather researchers in this area to share ideas and state-of-the-art solutions.

Call for Contributions

We invite submissions from a broad range of topics, including but not limited to:

  • Imitation learning

  • Learning from CAD

  • Multimodal representations

  • Reward learning

  • Safety and stability guarantees

  • Sample efficiency

  • Sim2real transfer

  • Tactile representation

  • Task sequence learning

Accepted contributions will be presented as lightning talks and posters. The best contributions will be presented as spotlight talks.

Invited Speakers

Jeannette Bohg

Stanford University

Kensuke Harada

Osaka University

Ludovic Righetti

New York University

Sergey Levine

UC Berkeley

Shahbaz Abdul Khader

KTH Royal Institute of Technology

Aude Billard

EPFL

Accepted Papers

Pathologies and Challenges of Using Differentiable Simulators in Policy Optimization for Contact-Rich Manipulation. H.J. Terry Suh, Max Simchowitz, Kaiqing Zhang, Tao Pang, Russ Tedrake. (oral paper)

SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning. Jun Lv, Qiaojun Yu, Lin Shao, Wenhai Liu, Wenqiang Xu, Cewu Lu. (oral paper)

RRL: Resnet as representation for Reinforcement Learning. Rutav Shah, Vikash Kumar. (oral paper)

Self-Supervised Learning of Multi-Object Keypoints for Robotic Manipulation. Jan Ole von Hartz, Eugenio Chisari, Tim Welschehold, Abhinav Valada.

Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity. Wenxuan Zhou, David Held.

Efficient Object Manipulation Planning with Monte Carlo Tree Search. Huaijiang Zhu, Ludovic Righetti.

Tactile Sensing and its Role in Learning and Deploying Robotic Grasping Controllers. Alexander Koenig, Zixi Liu, Lucas Janson, Robert Howe.

Learning active tactile perception through belief-space control. Jean-François Tremblay, Johanna Hansen, David Meger, Francois Robert Hogan, Gregory Dudek.

Learning Slip with a Patterned Capacitive Tactile Sensor. Yuri Gloumakov, Tae Myung Huh.

Learning Goal-Oriented Non-Prehensile Pushing in Cluttered Scenes. Nils Dengler, David Großklaus, Maren Bennewitz. (remote presentation)

Integrating Force-based Manipulation Primitives with Deep Learning-based Visual Servoing for Robotic Assembly. Yee Sien Lee, Nghia Vuong, Nicholas Adrian, Quang Cuong Pham. (remote presentation)

Learning Dense Reward with Temporal Variant Self-Supervision. Yuning Wu, Jieliang Luo, Hui Li. (remote presentation)

Synthesizing and Simulating Volumetric Meshes from Vision-based Tactile Imprints. Xinghao Zhu, Siddarth Jain, Masayoshi Tomizuka, Jeroen Vanbaar. (remote presentation)

Schedule


  • 09:00 - 09:15 Welcome and introduction

  • 09:15 - 09:45 Invited talk: Jeannette Bohg - Fusing Vision and Touch for Contact-Rich Manipulation

  • 09:45 - 10:15 Invited talk: Kensuke Harada - Application of Reinforcement Learning for Force Controlled Tasks (remote)

  • 10:15 - 10:40 Coffee break

  • 10:40 - 11:10 Invited talk: Aude Billard - TBA

  • 11:10 - 11:20 Spotlight talk: Pathologies and Challenges of Using Differentiable Simulators in Policy Optimization for Contact-Rich Manipulation

  • 11:20 - 11:50 Invited talk: Shahbaz Abdul Khader - Control Stability in Learning Contact-Rich Manipulation Skills (remote)

  • 11:50 - 12:20 Invited talk: Ludovic Righetti - A few ideas to improve efficiency and robustness of complex contact interactions

  • 12:20 - 13:45 Lunch break

  • 13:45 - 14:15 Invited talk: Sergey Levine - Data-Driven Robotic Reinforcement Learning

  • 14:15 - 14:25 Spotlight talk: SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning

  • 14:25 - 14:35 Spotlight talk: RRL: Resnet as representation for Reinforcement Learning

  • 14:35 - 15:05 Lightning talks

  • 15:05 - 16:00 Coffee break & Posters (in person and remote)

  • 16:00 - 16:15 Final remarks

Program Committee

Christian Pek


Cristian C. Beltran-Hernandez

Osaka University

Ioanna Mitsioni


Krishnan Srinivasan

Stanford University

Nghia Vuong

Nanyang Technological University

Shaoxiong Wang


Zheng Wu

UC Berkeley



Hui Li

Autodesk Research


Quang-Cuong Pham

Nanyang Technological University
