M2CURL: Sample-Efficient Multimodal Reinforcement Learning via Self-Supervised Representation Learning for Robotic Manipulation

Fotios Lygerakis

Vedant Dave

Elmar Rueckert

Abstract

One of the most critical aspects of multimodal Reinforcement Learning (RL) is the effective integration of different observation modalities. Learning robust and accurate representations from these modalities is key to enhancing the sample efficiency and robustness of RL algorithms. However, learning representations for visuotactile data in RL settings poses significant challenges, particularly due to the high dimensionality of the data and the complexity of correlating visual and tactile inputs with the dynamic environment and task objectives.

To address these challenges, we propose Multimodal Contrastive Unsupervised Reinforcement Learning (M2CURL). Our approach employs a novel multimodal self-supervised learning technique that learns efficient representations and contributes to faster convergence of RL algorithms. Our method is agnostic to the underlying RL algorithm and can therefore be integrated with any of them. We evaluate M2CURL on the Tactile Gym 2 simulator and show that it significantly improves learning efficiency across different manipulation tasks, as evidenced by faster convergence rates and higher cumulative rewards per episode compared to standard RL algorithms without our representation learning approach.
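At the core of M2CURL is a contrastive objective of the InfoNCE form, in which each query representation is pulled toward its matching key representation and pushed away from the other keys in the batch. Below is a minimal PyTorch sketch of such a loss; the function name, temperature value, and framework choice are illustrative assumptions rather than the authors' released code.

import torch
import torch.nn.functional as F

def info_nce(queries, keys, temperature=0.1):
    # InfoNCE: the positive for queries[i] is keys[i]; all other keys in the
    # batch act as negatives. Representations are L2-normalized so the logits
    # are temperature-scaled cosine similarities.
    queries = F.normalize(queries, dim=1)
    keys = F.normalize(keys.detach(), dim=1)   # key encoder receives no gradient
    logits = queries @ keys.t() / temperature  # (B, B) similarity matrix
    labels = torch.arange(queries.size(0), device=queries.device)
    return F.cross_entropy(logits, labels)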

The M2CURL Architecture: First, a batch of visuotactile observations is sampled from the replay buffer. Then, two random augmentations are applied to produce views for the query (online) and key (momentum) encoders, and their representations are computed. The query and key representations are used to compute the inter- and intra-modality codes via the respective heads, from which the inter- and intra-modality losses are computed. Finally, the weighted sum of these sub-losses is passed to the RL algorithm as a combined multimodal contrastive loss. Momentum encoders are denoted with *.
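Putting the figure's pieces together: the key encoders are momentum (exponential-moving-average) copies of the query encoders, and the combined loss is a weighted sum of two intra-modality terms (vision to vision*, tactile to tactile*) and two inter-modality terms (vision to tactile*, tactile to vision*). The sketch below reuses the info_nce helper above; the momentum coefficient and the uniform loss weights are assumptions for illustration, not the paper's tuned values.

@torch.no_grad()
def momentum_update(online, momentum, m=0.99):
    # EMA update of the key (momentum) encoder from the query (online) encoder.
    for p_o, p_m in zip(online.parameters(), momentum.parameters()):
        p_m.data.mul_(m).add_(p_o.data, alpha=1.0 - m)

def m2curl_loss(vis_q, tac_q, vis_k, tac_k, weights=(1.0, 1.0, 1.0, 1.0)):
    # vis_q, tac_q: head outputs for the augmented query views (online encoders).
    # vis_k, tac_k: head outputs for the augmented key views (momentum encoders).
    w_vv, w_tt, w_vt, w_tv = weights
    return (w_vv * info_nce(vis_q, vis_k)    # intra-modality: vision  -> vision*
          + w_tt * info_nce(tac_q, tac_k)    # intra-modality: tactile -> tactile*
          + w_vt * info_nce(vis_q, tac_k)    # inter-modality: vision  -> tactile*
          + w_tv * info_nce(tac_q, vis_k))   # inter-modality: tactile -> vision*

During training, this combined loss would simply be added to whichever RL objective is being optimized, which is what keeps the representation learning agnostic to the choice of RL algorithm.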

Contact us: 

For any questions, you can contact us at: fotios.lygerakis@unileoben.ac.at