PEX: Policy Expansion for Bridging Offline-to-Online RL
Haichao Zhang Wei Xu Haonan Yu
Horizon Robotics
ICLR 2023
Abstract
Pre-training with offline data and online fine-tuning using reinforcement learning is a promising strategy for learning control policies, leveraging the best of both worlds in terms of sample efficiency and performance. One natural approach is to initialize the policy for online learning with the one trained offline. In this work, we introduce a policy expansion scheme for this task. After learning the offline policy, we use it as one candidate policy in a policy set. We then expand the policy set with another policy, which is responsible for further learning. The two policies are composed in an adaptive manner for interacting with the environment. With this approach, the policy previously learned offline is fully retained during online learning, mitigating potential issues such as the destruction of useful offline-learned behaviors in the initial stage of online learning, while allowing the offline policy to participate in exploration naturally and adaptively. Moreover, new useful behaviors can potentially be captured by the newly added policy through learning. Experiments are conducted on a number of tasks and the results demonstrate the effectiveness of the proposed approach.
Illustration of Different Training Schemes. Offline training and online RL have each been developed within their own training stage. The direct offline-to-online learning approach continues with online training after the offline stage is finished, updating the same policy network. The proposed Policy Expansion approach bridges offline and online training by retaining the policy learned offline (πβ) and expanding the policy set with another learnable policy (πθ) for capturing further performance improvements. Both policies participate in interaction with the environment and in learning, in an adaptive fashion.
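The adaptive composition of the two policies can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a learned Q-function is available and samples which policy's proposed action to execute with probability proportional to a softmax over the Q-values of the proposals; all function names here (`pex_act`, `pi_offline`, `pi_online`, `q_fn`) are hypothetical.

```python
import numpy as np

def pex_act(state, pi_offline, pi_online, q_fn, temperature=1.0, rng=None):
    """Adaptively compose the frozen offline policy (pi_beta) and the new
    learnable policy (pi_theta): each proposes an action for the current
    state, and one proposal is sampled with probability proportional to
    exp(Q(state, action) / temperature).  (Hypothetical sketch.)"""
    rng = rng or np.random.default_rng()
    proposals = [pi_offline(state), pi_online(state)]         # candidate actions
    q_values = np.array([q_fn(state, a) for a in proposals])  # score each proposal
    logits = q_values / temperature
    probs = np.exp(logits - logits.max())                     # numerically stable softmax
    probs /= probs.sum()
    idx = rng.choice(len(proposals), p=probs)
    return proposals[idx]
```

With this composition, the offline policy keeps contributing wherever its actions score well under the current Q-function, while the new policy can take over in states where it has learned better behavior.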
Visualization of PEX Policy
Related Publications and Resources
Policy Expansion for Bridging Offline-to-Online Reinforcement Learning
Haichao Zhang, Wei Xu and Haonan Yu
International Conference on Learning Representations (ICLR), 2023
@inproceedings{PEX,
  title={Policy Expansion for Bridging Offline-to-Online Reinforcement Learning},
  author={Haichao Zhang and Wei Xu and Haonan Yu},
  booktitle={International Conference on Learning Representations},
  year={2023}
}