Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high-variance problem is particularly exacerbated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. The core idea behind the proposed scheme is to incorporate all available information, including actions, in specific ways that preserve the unbiased nature of policy gradients. We show that this core idea can be applied more broadly to settings such as POMDPs and multi-agent tasks. We demonstrate and quantify the benefit of our proposed schemes through both theoretical analysis and numerical results. Our experiments indicate that policy gradient methods using our proposed baselines for variance reduction learn faster and scale gracefully to high-dimensional control problems.
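
For context, a minimal sketch of the kind of estimator involved, assuming a policy that factorizes across action dimensions, $\pi_\theta(a \mid s) = \prod_{i=1}^{m} \pi_\theta(a_i \mid s)$ (the notation $b_i$ and the per-dimension baseline form are illustrative, not quoted from the paper):

    \nabla_\theta J(\theta)
      = \mathbb{E}_{s, a}\!\left[ \sum_{i=1}^{m} \nabla_\theta \log \pi_\theta(a_i \mid s)
        \left( Q^{\pi}(s, a) - b_i(s, a_{-i}) \right) \right]

Here each baseline $b_i$ may depend on the state and on the other action dimensions $a_{-i}$, but not on $a_i$ itself. Unbiasedness then follows dimension by dimension, since $\mathbb{E}_{a_i \sim \pi_\theta(\cdot \mid s)}\!\left[ \nabla_\theta \log \pi_\theta(a_i \mid s)\, b_i(s, a_{-i}) \right] = b_i(s, a_{-i})\, \nabla_\theta \!\int \pi_\theta(a_i \mid s)\, da_i = 0$.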
@article{Wu2018Variance,
title={Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines},
author={Cathy Wu and Aravind Rajeswaran and Yan Duan and Vikash Kumar
and Alexandre M. Bayen and Sham M. Kakade and Igor Mordatch and Pieter Abbeel},
journal={CoRR},
year={2018},
volume={abs/1803.07246}
}