TTR-Based Reward for Reinforcement Learning with Implicit Model Priors

Accepted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020 as an oral presentation.

https://arxiv.org/abs/1903.09762

Abstract:

Model-free reinforcement learning (RL) is a powerful approach for learning control policies directly from high-dimensional states and observations. However, it tends to be data-inefficient, which is especially costly in robotic learning tasks. On the other hand, optimal control does not require data if the system model is known, but cannot scale to models with high-dimensional states and observations. To exploit the benefits of both model-free RL and optimal control, we propose time-to-reach-based (TTR-based) reward shaping, an optimal control-inspired technique that alleviates data inefficiency while retaining the advantages of model-free RL. This is achieved by summarizing key system model information using a TTR function, which greatly speeds up the RL process, as shown in our simulation results. The TTR function is defined as the minimum time required to move from any state to the goal under assumed system dynamics constraints. Since the TTR function is computationally intractable for systems with high-dimensional states, we compute it for approximate, lower-dimensional system models that still capture key dynamic behaviors. Our approach can be flexibly and easily incorporated into any model-free RL algorithm without altering the original algorithm structure, and is compatible with any other techniques that may facilitate the RL process. We evaluate our approach on two representative robotic learning tasks and three well-known model-free RL algorithms, and show significant improvements in data efficiency and performance.
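As a rough illustration of the idea only (not the paper's actual systems or solver), the sketch below approximates a TTR function for a hypothetical 1D double integrator by value iteration on a coarse grid, then exposes the negative TTR as a dense shaped reward. All dynamics, grids, and parameters here are assumptions for the example.

```python
import numpy as np

# Illustrative sketch: a hypothetical 1D double integrator (state = position x,
# velocity v) with the TTR function approximated by value iteration on a coarse
# grid. The paper's systems and solver are not reproduced here.
X = np.linspace(-1.0, 1.0, 21)   # position grid
V = np.linspace(-1.0, 1.0, 21)   # velocity grid
ACTIONS = (-1.0, 0.0, 1.0)       # bounded acceleration inputs
DT = 0.1                         # integration time step

def nearest(grid, value):
    """Index of the grid point closest to `value`."""
    return int(np.argmin(np.abs(grid - value)))

# TTR[i, j] ~ minimum time to reach the goal from state (X[i], V[j]).
TTR = np.full((len(X), len(V)), np.inf)
gi, gj = nearest(X, 0.0), nearest(V, 0.0)
TTR[gi, gj] = 0.0  # zero time-to-reach at the goal state

# Dynamic-programming fixed point: T(s) = min_a [ dt + T(f(s, a)) ].
for _ in range(300):
    updated = TTR.copy()
    for i, x in enumerate(X):
        for j, v in enumerate(V):
            if (i, j) == (gi, gj):
                continue
            best = np.inf
            for a in ACTIONS:
                # One Euler step of x' = v, v' = a, clipped to the grid.
                xn = np.clip(x + v * DT, X[0], X[-1])
                vn = np.clip(v + a * DT, V[0], V[-1])
                best = min(best, DT + TTR[nearest(X, xn), nearest(V, vn)])
            updated[i, j] = best
    if np.allclose(updated, TTR):
        break
    TTR = updated

def shaped_reward(x, v):
    """Dense reward: negative TTR, so states nearer the goal in time score higher."""
    return -TTR[nearest(X, x), nearest(V, v)]
```

In the paper's framework, a TTR value like this would be computed once for a low-dimensional approximate model of the robot and then queried as a shaping term inside any model-free RL loop, leaving the underlying algorithm unchanged.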