Mutual Alignment

Transfer Learning

Training robots for operation in the real world is a complex, time-consuming, and potentially expensive task. Despite the significant success of reinforcement learning in games and simulations, progress on real robot platforms has lagged behind. While sample complexity can be reduced by training policies in simulation, such policies can perform sub-optimally on the real platform given imperfect calibration of model dynamics. We present an approach -- supplemental to fine-tuning on the real robot -- that further benefits from parallel access to a simulator during training and reduces sample requirements on the real robot. The approach harnesses auxiliary rewards to guide exploration for the real-world agent based on the proficiency of the agent in simulation, and vice versa. In this context, we demonstrate empirically that this reciprocal alignment benefits both agents, as the agent in simulation can adjust to optimize its behaviour for states commonly visited by the real-world agent.
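The mutual auxiliary rewards described above can be illustrated with a minimal sketch. This is not the paper's implementation: the discriminator here is a toy logistic model with hand-picked parameters, and the 2-D state batches are synthetic stand-ins for sim and real rollouts. It only shows the shape of the idea: each agent receives an extra reward for visiting states the discriminator attributes to the other agent, pulling the two state-visitation distributions together.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator_prob(states, w, b):
    """Toy logistic discriminator D(s): probability that a state was
    visited by the simulation agent rather than the real-world agent."""
    return 1.0 / (1.0 + np.exp(-(states @ w + b)))

# Hypothetical 2-D state batches from rollouts of each agent.
sim_states = rng.normal(loc=0.5, scale=0.1, size=(64, 2))
real_states = rng.normal(loc=-0.5, scale=0.1, size=(64, 2))

w, b = np.array([1.0, 1.0]), 0.0  # illustrative discriminator parameters

# Mutual alignment: the real-world agent is rewarded for states the
# discriminator labels as "simulation", and the simulation agent for
# states labelled as "real", so each is guided toward the other.
r_aux_real = np.log(discriminator_prob(real_states, w, b) + 1e-8)
r_aux_sim = np.log(1.0 - discriminator_prob(sim_states, w, b) + 1e-8)
```

In training, these auxiliary terms would be added to each agent's environment reward while the discriminator is updated on fresh rollouts.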


Method Overview

Example Videos

Sparse Rewards - Cartpole Swingup

The videos show the performance of all approaches at mid-training, since for comparably simple tasks the final performance of all baselines can be similar.

Independent

cs_independent.mp4

MATLu

cs_MATLu.mp4

Direct Transfer

cs_direct.mp4

MATL

cs_MATL.mp4

Fine Tuning

cs_fine_tuning.mp4

MATLf

cs_MATLf.mp4

Uninformative Rewards - Hopper2D

To evaluate the capability of MATL to produce auxiliary forward-guiding rewards, we created a scenario with conflicting rewards in which the auxiliary signal must overcome the environment reward for stability. Since moving forward is less stable, with MATL the agent learns more careful, often ankle-based forward motion patterns.

Independent

h_independent.mp4

MATLu

h_MATLu.mp4

Direct Transfer

h_direct.mp4

MATL

h_MATL.mp4

Fine Tuning

h_fine_tuning.mp4

MATLf

h_MATLf.mp4

Wasserstein GAN Adaptation (MuJoCo to DART)

MATL WGAN
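For the Wasserstein GAN adaptation, the standard discriminator is replaced by a critic whose unbounded score serves directly as the alignment signal. The sketch below is illustrative only, assuming a linear critic with the weight clipping of the original WGAN formulation; the state batches are again synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def critic(states, w):
    """Linear WGAN critic f(s); unbounded score, no sigmoid."""
    return states @ w

def clip_weights(w, c=0.01):
    """Weight clipping enforces the Lipschitz constraint used by the
    original WGAN formulation."""
    return np.clip(w, -c, c)

# Hypothetical 2-D state batches from each agent's rollouts.
sim_states = rng.normal(loc=0.5, scale=0.1, size=(64, 2))
real_states = rng.normal(loc=-0.5, scale=0.1, size=(64, 2))

w = clip_weights(rng.normal(size=2))

# The critic maximises E[f(real)] - E[f(sim)]; its score replaces the
# log-probability of a standard GAN discriminator, so the auxiliary
# reward for the simulation agent is simply the critic score f(s).
wasserstein_estimate = (critic(real_states, w).mean()
                        - critic(sim_states, w).mean())
r_aux_sim = critic(sim_states, w)
```

The critic score avoids the vanishing-gradient behaviour of a saturated sigmoid discriminator, which is the usual motivation for the WGAN variant.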

Experiment and Algorithm Parameters

MATL

Additional Reacher Tasks:

  • These tasks are included in the arXiv preprint version of our work.