NoRML: No-Reward Meta Reinforcement Learning

Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Jie Tan, Chelsea Finn

Robotics at Google

To be presented at AAMAS 2019 (arXiv)

Source code

Abstract

Efficiently adapting to new environments and changes in dynamics is critical for agents to successfully operate in the real world. Reinforcement learning (RL) based approaches typically rely on external reward feedback for adaptation. However, in many scenarios this reward signal might not be readily available for the target task, or the difference between the environments can be implicit and only observable from the dynamics. To this end, we introduce a method that allows for self-adaptation of learned policies: No-Reward Meta Learning (NoRML). NoRML extends Model-Agnostic Meta-Learning (MAML) for RL and uses observable dynamics of the environment instead of an explicit reward function in MAML’s fine-tuning step. Our method has a more expressive update step than MAML, while maintaining MAML’s gradient-based foundation. Additionally, to allow more targeted exploration, we implement an extension to MAML that effectively disconnects the meta-policy parameters from the fine-tuned policies’ parameters. We first study our method on a number of synthetic control problems and then validate it on common benchmark environments, showing that NoRML outperforms MAML when the dynamics change between tasks.
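The sketch below illustrates the kind of no-reward fine-tuning step the abstract describes: MAML's inner policy-gradient update, but with the reward-derived advantage replaced by a learned function of the observed transition (s, a, s'), together with a learned per-parameter step size and a learned offset that decouples the fine-tuned parameters from the meta-policy. This is a minimal sketch assuming a linear-Gaussian policy and a one-layer advantage network; the names, shapes, and exact update form are illustrative assumptions, and the released source code is the reference implementation.

```python
import numpy as np

def norml_adapt(theta, offset, alpha, adv_params, trajectory):
    """One no-reward adaptation step for a linear-Gaussian policy a ~ N(W s + b, I).

    theta:      dict with policy weights "W" (A x S) and bias "b" (A,)
    offset:     learned offset with the same shapes as theta
    alpha:      learned per-parameter step sizes with the same shapes as theta
    adv_params: parameters of a tiny learned advantage ("pseudo-advantage") model
    trajectory: list of (state, action, next_state) transitions -- no rewards used
    """
    grad_W = np.zeros_like(theta["W"])
    grad_b = np.zeros_like(theta["b"])
    for s, a, s_next in trajectory:
        # Learned advantage scores the transition from observed dynamics alone.
        x = np.concatenate([s, a, s_next])
        advantage = np.tanh(adv_params["w"] @ x + adv_params["b"])
        # Analytic policy gradient of log N(a; W s + b, I) for this transition.
        delta = a - (theta["W"] @ s + theta["b"])
        grad_W += advantage * np.outer(delta, s)
        grad_b += advantage * delta
    # Fine-tuned parameters: meta-parameters + learned offset
    # + per-parameter step size times the no-reward policy gradient.
    return {
        "W": theta["W"] + offset["W"] + alpha["W"] * grad_W,
        "b": theta["b"] + offset["b"] + alpha["b"] * grad_b,
    }
```

Because the advantage here is learned from transitions rather than computed from rewards, the same update can in principle be applied at test time even when no reward signal is available.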

HalfCheetah Experiments

To simulate a wiring/transmission error, we allow the control outputs of HalfCheetah's two hip joints to be swapped.
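As a concrete illustration of this setup, the wrapper below shows one way such a swap could be implemented. The joint indices (0 and 3, the back- and front-thigh actuators in the standard MuJoCo HalfCheetah action ordering) and the wrapper itself are assumptions made for illustration, not the released experiment code.

```python
import gym
import numpy as np

class SwappedHipCheetah(gym.ActionWrapper):
    """Optionally swaps the commands sent to HalfCheetah's two hip actuators."""

    def __init__(self, env, swap=True, hip_indices=(0, 3)):
        super().__init__(env)
        self.swap = swap
        self.hip_indices = hip_indices

    def action(self, action):
        if not self.swap:
            return action
        a = np.array(action, copy=True)
        i, j = self.hip_indices
        a[i], a[j] = action[j], action[i]  # crossed wiring between the two hips
        return a

# Two tasks that differ only in their (unobserved) dynamics:
nominal = SwappedHipCheetah(gym.make("HalfCheetah-v2"), swap=False)
miswired = SwappedHipCheetah(gym.make("HalfCheetah-v2"), swap=True)
```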

Domain Randomization (DR)

Although DR achieves a reward similar to NoRML's, its behavior is considerably less stable, since a single policy must solve all tasks.

Video: cheetah_dr.mp4

NoRML (ours)

With NoRML, the agent adapts effectively to dynamics changes and produces a stable fine-tuned gait.

Video: NoRML_halfcheetah.mp4