Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning

Anusha Nagabandi, Gregory Kahn, Ronald S. Fearing, Sergey Levine

University of California, Berkeley

Model-free deep reinforcement learning methods have successfully learned complex behavioral strategies for a wide range of tasks, but typically require many samples to achieve good performance. Model-based algorithms can, in principle, provide much more efficient learning, but have proven difficult to extend to expressive, high-capacity models such as deep neural networks. In this work, we demonstrate that medium-sized neural network models can in fact be combined with model predictive control (MPC) to achieve excellent sample complexity in a model-based reinforcement learning algorithm, producing stable and plausible gaits that accomplish various complex locomotion tasks. We also propose using deep neural network dynamics models to initialize a model-free learner, in order to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. We perform this initialization by using rollouts from the trained model-based controller as supervision to pre-train a policy, which we then fine-tune using a model-free method. We empirically demonstrate that the resulting hybrid algorithm can drastically accelerate model-free learning and outperform purely model-free learners on several MuJoCo locomotion benchmark tasks, achieving sample-efficiency gains over a purely model-free learner of 330x on swimmer, 26x on hopper, 4x on half-cheetah, and 3x on ant.
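
To make the pipeline described above concrete, here is a minimal sketch (not the authors' released code) of the two core components: a random-shooting MPC planner that plans through a learned dynamics model f(s, a) -> s', and the logging of (state, action) pairs from that controller to supervise pre-training of a model-free policy. The state and action dimensions, planning horizon, reward function, and the tiny random-weight stand-in dynamics network are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM = 10, 3       # illustrative dimensions, not from the paper
HORIZON, NUM_CANDIDATES = 15, 1000  # planning horizon and number of sampled action sequences

def dynamics_model(state, action, W1, W2):
    # Stand-in for the learned dynamics model: a small random-weight MLP that
    # predicts the change in state, f(s, a) ~ s' - s. In practice this network
    # is trained on (s, a, s') transitions collected from the environment.
    x = np.concatenate([state, action])
    h = np.tanh(W1 @ x)
    return state + W2 @ h

def reward(state, action):
    # Placeholder task reward, e.g. forward progress minus a control cost.
    return state[0] - 0.1 * np.sum(action ** 2)

def mpc_action(state, W1, W2):
    # Random-shooting MPC: sample candidate action sequences, evaluate each by
    # rolling it out through the learned model, and execute only the first
    # action of the highest-return sequence (replanning at every step).
    candidates = rng.uniform(-1.0, 1.0, size=(NUM_CANDIDATES, HORIZON, ACTION_DIM))
    returns = np.zeros(NUM_CANDIDATES)
    for i, seq in enumerate(candidates):
        s = state.copy()
        for a in seq:
            returns[i] += reward(s, a)
            s = dynamics_model(s, a, W1, W2)
    return candidates[np.argmax(returns)][0]

# Hypothetical usage: run the MPC controller for a few steps and log the
# (state, action) pairs it produces; these pairs would then serve as the
# supervision for pre-training a policy that is later fine-tuned model-free.
W1 = rng.normal(scale=0.1, size=(64, STATE_DIM + ACTION_DIM))
W2 = rng.normal(scale=0.1, size=(STATE_DIM, 64))

state = rng.normal(size=STATE_DIM)
bc_dataset = []
for _ in range(5):
    action = mpc_action(state, W1, W2)
    bc_dataset.append((state.copy(), action))
    state = dynamics_model(state, action, W1, W2)  # stand-in for a real environment step

The sketch mirrors one design choice from the paper: the dynamics network predicts the state difference rather than the absolute next state. The reward function is only a placeholder for the task-specific reward used during planning.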

Overview Video:

Pure model-based results:

ANT (straight, left, right, u-turn):

SWIMMER (straight, left, right):

CHEETAH (forward, backward, forward-backward):

Hybrid model-based plus model-free results: