MBB: Model-Based Baseline for Efficient Reinforcement Learning

https://arxiv.org/abs/2011.02073

Abstract:

Model-free reinforcement learning (RL) is capable of learning control policies for high-dimensional, complex robotic tasks, but tends to be data-inefficient. Model-based RL and optimal control have proven to be much more data-efficient when an accurate model of the system and environment is available, but they can be difficult to scale to expressive models for high-dimensional problems. In this paper, we propose a novel approach that alleviates the data inefficiency of model-free RL by warm-starting the learning process with lower-dimensional model-based solutions. In particular, we propose a baseline function that is initialized via supervision from a low-dimensional value function. Such a low-dimensional value function can be obtained by applying model-based techniques to a low-dimensional problem with a known approximate system model. Our approach thus implicitly exploits model priors from a simplified problem space and avoids the direct use of high-dimensional, expressive models. We demonstrate our approach on two representative robotic learning tasks, observing significant improvements in performance and data efficiency, and analyze our method empirically on a third task.
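To make the warm-starting idea concrete, below is a minimal sketch of supervised baseline initialization. It assumes a hypothetical projection `project_to_low_dim` from the full state to the simplified problem space and a stand-in `low_dim_value` for the model-based value function; these names, the network sizes, and the toy targets are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: warm-start a baseline network from a low-dimensional value function.
# `project_to_low_dim` and `low_dim_value` are hypothetical stand-ins.
import torch
import torch.nn as nn

def project_to_low_dim(state: torch.Tensor) -> torch.Tensor:
    """Map the high-dimensional state to the simplified low-dimensional
    problem space (here: just keep the first two coordinates)."""
    return state[:, :2]

def low_dim_value(z: torch.Tensor) -> torch.Tensor:
    """Stand-in for a value function computed with model-based techniques
    (e.g., dynamic programming or LQR) on the low-dimensional problem.
    Here: a toy quadratic cost-to-go."""
    return -(z ** 2).sum(dim=-1, keepdim=True)

# Baseline network defined over the full high-dimensional state.
baseline = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(baseline.parameters(), lr=1e-3)

# Supervised warm-start: regress the baseline onto low-dimensional value targets.
for _ in range(500):
    s = torch.randn(256, 8)                         # sampled high-dim states
    target = low_dim_value(project_to_low_dim(s))   # model-based supervision
    loss = nn.functional.mse_loss(baseline(s), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After this pretraining, the warm-started baseline could serve as the value baseline in a standard policy-gradient method (e.g., advantage = return - baseline(s)) and be fine-tuned alongside the policy during model-free training.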