Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines

Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exacerbated in problems with long horizons and high-dimensional action spaces. To mitigate this issue, we derive a bias-free, action-dependent baseline for variance reduction, which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis and numerical results. In particular, we theoretically analyze the variance reduction achieved by the action-dependent baseline over the optimal state-only baseline. Our experimental results indicate that action-dependent baselines allow for faster learning on standard benchmark tasks, as well as on other problems like high-dimensional dexterous manipulation, partially observable peg insertion, and multi-agent communication.
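
To make the idea concrete, below is a minimal NumPy sketch of a policy gradient estimator with an action-dependent baseline, assuming a factorized Gaussian policy over independent action dimensions. The functions `q_hat` and `action_dependent_baseline` are hypothetical stand-ins (the paper learns these quantities); the key point illustrated is that the baseline for dimension i may depend on the state and on the other action dimensions a_{-i} without biasing the gradient for that dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: factorized Gaussian policy over m action dimensions.
state_dim, action_dim = 4, 3
theta_mean = rng.normal(scale=0.1, size=(action_dim, state_dim))  # per-dimension mean params
log_std = np.zeros(action_dim)

def sample_action(s):
    mean = theta_mean @ s
    return mean + np.exp(log_std) * rng.normal(size=action_dim), mean

def grad_log_prob(s, a, mean):
    # Gradient of log N(a_i | theta_i . s, sigma_i^2) w.r.t. theta_i, one row per dimension.
    return ((a - mean) / np.exp(2 * log_std))[:, None] * s[None, :]

def q_hat(s, a):
    # Stand-in for a return / Q estimate (hypothetical, not the paper's critic).
    return -np.sum((a - 0.5) ** 2) + s @ np.ones(state_dim)

def action_dependent_baseline(s, a, i):
    # b_i may depend on the state and on the *other* action dimensions a_{-i};
    # here a hand-crafted example, in practice a learned function.
    a_minus_i = np.delete(a, i)
    return s @ np.ones(state_dim) - np.sum((a_minus_i - 0.5) ** 2)

# Per-dimension policy gradient with the action-dependent baseline:
#   g_i = grad_{theta_i} log pi_i(a_i | s) * (Q(s, a) - b_i(s, a_{-i}))
s = rng.normal(size=state_dim)
a, mean = sample_action(s)
glp = grad_log_prob(s, a, mean)
grad = np.stack([glp[i] * (q_hat(s, a) - action_dependent_baseline(s, a, i))
                 for i in range(action_dim)])
print(grad.shape)  # (action_dim, state_dim): one gradient block per factorized dimension
```

Because pi(a | s) factorizes across dimensions, a_i is conditionally independent of a_{-i} given s, so subtracting b_i(s, a_{-i}) leaves the expected gradient for theta_i unchanged while allowing the baseline to track more of the variation in Q(s, a) than a state-only baseline could.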