Batch Policy Learning under Constraints

Hoang M. Le, Cameron Voloshin, Yisong Yue

California Institute of Technology

Abstract: When learning policies for real-world domains, two important questions arise: (i) how to efficiently use pre-collected off-policy, non-optimal behavior data; and (ii) how to mediate among different competing objectives and constraints. We thus study the problem of batch policy learning under multiple constraints, and offer a systematic solution. We first propose a flexible meta-algorithm that admits any batch reinforcement learning and online learning procedure as subroutines. We then present a specific algorithmic instantiation and provide performance guarantees for the main objective and all constraints. To certify constraint satisfaction, we propose a new and simple method for off-policy policy evaluation (OPE) and derive PAC-style bounds. Our algorithm achieves strong empirical results in different domains, including in a challenging problem of simulated car driving subject to multiple constraints such as lane keeping and smooth driving. We also show experimentally that our OPE method outperforms other popular OPE techniques on a standalone basis, especially in a high-dimensional setting.
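The meta-algorithm's structure can be illustrated with a short sketch, assuming a Lagrangian (primal-dual) formulation: a policy player best-responds to the current constraint weights via a batch RL subroutine, while a lambda player updates the constraint weights via an online learning rule (plain projected gradient ascent here, chosen for brevity). The subroutine names and signatures below are illustrative placeholders, not the released implementation's API.

```python
import numpy as np

def constrained_batch_policy_learning(
        batch_rl,          # callable: dual weights lam -> policy (e.g., FQI)
        ope_estimate,      # callable: (policy, constraint index) -> off-policy value (e.g., FQE)
        constraint_limits, # array of thresholds, one per constraint cost
        n_iterations=100, lr=0.1, lam_max=10.0):
    """Minimal sketch of a Lagrangian-based meta-algorithm for batch policy
    learning under constraints; all interfaces here are hypothetical."""
    m = len(constraint_limits)
    lam = np.zeros(m)            # dual variables, one per constraint
    mixture = []                 # the returned policy is a mixture over iterates

    for _ in range(n_iterations):
        # Policy player: best response to the Lagrangian-weighted objective
        # (main cost + lam . constraint costs), solved by any batch RL method.
        policy = batch_rl(lam)
        mixture.append(policy)

        # Certify constraint values of the new policy off-policy (e.g., with FQE).
        constraint_values = np.array([ope_estimate(policy, i) for i in range(m)])

        # Lambda player: online projected gradient ascent on constraint violation.
        lam = np.clip(lam + lr * (constraint_values - constraint_limits), 0.0, lam_max)

    return mixture, lam
```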

Paper: Link to arXiv version

Code: GitHub Implementation

Short Video Overview of the Work

EXPERIMENTAL RESULT - Car Racing

Fast Driving subject to Smooth Driving and Lane Centering constraints

Example Videos of Data-Generating Policy

Example Videos of Returned Policy

(iteration 1)

(iteration 5)

OFF-POLICY POLICY EVALUATION

Off-policy policy evaluation (OPE) is an important component of the main algorithm. We propose a simple model-free function-approximation approach, Fitted Q Evaluation (FQE), as the main OPE component.
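The sketch below illustrates the core FQE recipe for a finite action space: starting from a batch of transitions and a fixed evaluation policy, repeatedly regress onto bootstrapped targets r + γ Q_k(s', π(s')) and read off the value estimate at the initial states. The regressor choice, helper names, and data layout are assumptions for illustration, not the interface of the released code.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fqe(transitions, evaluation_policy, initial_states,
        n_actions, gamma=0.99, n_iterations=50):
    """Estimate the value of `evaluation_policy` from a fixed batch of
    (state, action, reward, next_state, done) transitions (hypothetical layout)."""
    states      = np.array([t[0] for t in transitions])
    actions     = np.array([t[1] for t in transitions])
    rewards     = np.array([t[2] for t in transitions])
    next_states = np.array([t[3] for t in transitions])
    dones       = np.array([t[4] for t in transitions], dtype=float)

    # Represent Q with a regressor over (state, one-hot action) features.
    def featurize(s, a):
        return np.concatenate([s, np.eye(n_actions)[a]], axis=1)

    q_model = None
    pi_next = np.array([evaluation_policy(s) for s in next_states])

    for _ in range(n_iterations):
        if q_model is None:
            targets = rewards                      # Q_0 target: immediate reward only
        else:
            # Bootstrapped regression target: r + gamma * Q_k(s', pi(s'))
            q_next = q_model.predict(featurize(next_states, pi_next))
            targets = rewards + gamma * (1.0 - dones) * q_next
        q_model = ExtraTreesRegressor(n_estimators=50)
        q_model.fit(featurize(states, actions), targets)

    # OPE estimate: average Q_K(s_0, pi(s_0)) over initial states.
    init = np.array(initial_states)
    pi_init = np.array([evaluation_policy(s) for s in init])
    return q_model.predict(featurize(init, pi_init)).mean()
```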