Policy gradient methods are a widely used class of model-free reinforcement learning algorithms where a state-dependent baseline is used to reduce the gradient estimator variance. Several recent papers extend the baseline to depend on both the state and action, and suggest that this significantly reduces variance and improves sample efficiency without introducing bias into the gradient estimates. To better understand this development, we decompose the variance of the policy gradient estimator and numerically show that learned state-action-dependent baselines do not in fact reduce variance over a state-dependent baseline in commonly tested benchmark domains. We confirm this unexpected result by reviewing the open-source code accompanying these prior papers, and show that subtle implementation decisions cause deviations from the methods presented in the papers and explain the source of the previously observed empirical gains. Furthermore, the variance decomposition highlights areas for improvement, which we demonstrate by illustrating a simple change to the typical value function parameterization that can significantly improve performance.
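For intuition, the decomposition referenced above can be obtained by applying the law of total variance twice to a single-timestep term of the score-function estimator. The following is a sketch of that decomposition; the notation (Q̂ for the sampled return estimate, φ for the baseline) is ours and may differ from the paper's exact presentation.

```latex
% Sketch of the variance decomposition (law of total variance applied twice)
% for a single-timestep term of the score-function estimator:
%   g_t = \nabla_\theta \log \pi_\theta(a_t \mid s_t)\,(\hat{Q}(s_t, a_t) - \phi(s_t, a_t))
\begin{align*}
\operatorname{Var}(g_t)
  &= \underbrace{\mathbb{E}_{s_t, a_t}\big[\operatorname{Var}_\tau(g_t \mid s_t, a_t)\big]}_{\Sigma_\tau}
   + \underbrace{\mathbb{E}_{s_t}\big[\operatorname{Var}_{a_t}\big(\mathbb{E}_\tau[g_t \mid s_t, a_t]\big)\big]}_{\Sigma_a} \\
  &\quad + \underbrace{\operatorname{Var}_{s_t}\big(\mathbb{E}_{a_t, \tau}[g_t \mid s_t]\big)}_{\Sigma_s}
\end{align*}
% The baseline \phi enters only \Sigma_a (given the standard bias correction
% for action-dependent baselines), so the most an action-dependent baseline
% can gain over a state-dependent one is bounded by the size of \Sigma_a.
```

An optimal state-action-dependent baseline drives Σ_a to zero, so comparing the magnitude of Σ_a against Σ_τ and Σ_s (as in Figure 1b below) bounds the possible gain.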
We construct a linear-quadratic-Gaussian (LQG) system for a simple 2D point-mass environment, where the agent's goal is to reach the origin (0, 0) within a finite horizon of 100 steps. As training progresses, we visualize both the agent's trajectories (Figure 1a) and the variance of the policy gradient estimator decomposed into its individual terms (Figure 1b); a minimal sketch of such a system is given after the figures. For more details, see Section 3.1 of the paper.
Figure 1a: The agent's trajectories in the 2D point-mass environment as training progresses.
Figure 1b: Variance of the policy gradient estimator, decomposed into its individual terms, as training progresses.
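To make the setup concrete, below is a minimal sketch of a 2D point-mass LQG system of the kind described above: linear (double-integrator) dynamics, a quadratic cost penalizing distance from the origin, and Gaussian process noise over a 100-step horizon. The dynamics matrices, cost weights, noise scale, and the `rollout` helper are illustrative assumptions, not the exact system from the paper or repository.

```python
import numpy as np

dt = 0.05
horizon = 100

# State s = [x, y, vx, vy]; action a = [fx, fy] (double-integrator dynamics).
A = np.block([
    [np.eye(2), dt * np.eye(2)],
    [np.zeros((2, 2)), np.eye(2)],
])
B = np.vstack([np.zeros((2, 2)), dt * np.eye(2)])

Q = np.diag([1.0, 1.0, 0.1, 0.1])  # quadratic state cost: distance from (0, 0)
R = 0.01 * np.eye(2)               # quadratic control cost
noise_std = 0.05                   # Gaussian process-noise scale (assumed)

def rollout(policy, rng):
    """Roll out one finite-horizon trajectory; return states, actions, costs."""
    s = rng.normal(size=4)  # random initial state (assumed)
    states, actions, costs = [], [], []
    for _ in range(horizon):
        a = policy(s)
        costs.append(s @ Q @ s + a @ R @ a)
        states.append(s)
        actions.append(a)
        s = A @ s + B @ a + noise_std * rng.normal(size=4)
    return np.array(states), np.array(actions), np.array(costs)

# Example usage: a hand-coded linear policy driving the mass toward the origin.
rng = np.random.default_rng(0)
_, _, costs = rollout(lambda s: -0.5 * s[:2] - 0.2 * s[2:], rng)
print("total cost:", costs.sum())
```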
The code to reproduce all results is hosted here: https://github.com/brain-research/mirage-rl.
@article{tucker2018mirage,
  author  = "Tucker, George and Bhupatiraju, Surya and Gu, Shixiang (Shane) and Turner, Richard E. and Ghahramani, Zoubin and Levine, Sergey",
  title   = "The Mirage of Action-Dependent Baselines in Reinforcement Learning",
  journal = "arXiv preprint arXiv:1802.10031",
  year    = "2018"
}