Model-Based Meta-Policy Optimization (MB-MPO)

This site provides supplemental material for the paper "Model-Based Reinforcement Learning via Meta-Policy Optimization".

Abstract:

Model-based reinforcement learning approaches carry the promise of being data efficient. However, due to challenges in learning dynamics models that sufficiently match the real-world dynamics, they struggle to achieve the same asymptotic performance as model-free methods. We propose Model-Based Meta-Policy-Optimization (MB-MPO), an approach that foregoes the strong reliance on accurate learned dynamics models. Using an ensemble of learned dynamic models, MB-MPO meta-learns a policy that can quickly adapt to any model in the ensemble with one policy gradient step. This steers the meta-policy towards internalizing consistent dynamics predictions among the ensemble while shifting the burden of behaving optimally w.r.t. the model discrepancies towards the adaptation step. Our experiments show that MB-MPO is more robust to model imperfections than previous model-based approaches. Finally, we demonstrate that our approach is able to match the asymptotic performance of model-free methods while requiring significantly less experience.
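To make the loop structure described above concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than the paper's implementation: the "ensemble" is a set of toy 1-D linear models, the policy gradients are crude finite differences with common random numbers, and the outer update is a first-order approximation, whereas MB-MPO itself fits neural-network dynamics models and optimizes the post-adaptation objective with full MAML-style meta-gradients (TRPO in the outer step). The sketch only shows the pattern: one inner policy-gradient adaptation step per model, followed by a meta-update of the shared policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for an ensemble of learned dynamics models: each predicts
# s' = a*s + b*u with slightly different coefficients (model discrepancy).
ensemble = [dict(a=0.9 + 0.05 * rng.standard_normal(),
                 b=1.0 + 0.05 * rng.standard_normal())
            for _ in range(5)]

def rollout_return(theta, model, seed, horizon=20, n_traj=16, sigma=0.1):
    """Average imagined return of a linear-Gaussian policy u ~ N(theta*s, sigma^2)
    under one learned model; the reward -s^2 drives the state towards zero."""
    local = np.random.default_rng(seed)  # common random numbers across calls
    total = 0.0
    for _ in range(n_traj):
        s = local.standard_normal()
        for _ in range(horizon):
            u = theta * s + sigma * local.standard_normal()
            s = model["a"] * s + model["b"] * u
            total += -s ** 2
    return total / n_traj

def policy_gradient(theta, model, eps=0.1):
    """Finite-difference surrogate for the policy gradient on imagined rollouts;
    the actual algorithm uses a likelihood-ratio estimator instead."""
    seed = int(rng.integers(1_000_000))
    return (rollout_return(theta + eps, model, seed) -
            rollout_return(theta - eps, model, seed)) / (2 * eps)

# Meta-training loop (structure only): adapt to each model with one policy
# gradient step, then move the meta-policy towards the adapted policies.
# Omitted: collecting real-environment data with the adapted policies and
# refitting the dynamics-model ensemble at every iteration.
theta_meta, alpha, beta = 0.0, 1e-2, 0.5
for iteration in range(50):
    adapted = [theta_meta + alpha * policy_gradient(theta_meta, m)
               for m in ensemble]                         # inner adaptation step per model
    theta_meta += beta * (np.mean(adapted) - theta_meta)  # first-order outer meta-update

print("meta-policy gain:", theta_meta)
```

In this toy setting the meta-policy gain settles near the value that stabilizes all models in the ensemble, mirroring the idea that the meta-policy internalizes the predictions the models agree on while the adaptation step handles their discrepancies.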

The following materials are provided on this site:

  • Experiments: Experiment plots, including links to the corresponding data and instructions on how to reproduce the experiments
  • Videos: Short video clips of the learned policies
  • Code: The source code, including the implementation of MB-MPO as well as scripts to reproduce all experiments
  • Data: The experimental data used to generate the plots presented in the experiment section