Safe Deep Policy Adaptation
Wenli Xiao*, Tairan He*, John Dolan, Guanya Shi
*Equal Contributions
Robotics Institute, Carnegie Mellon University
International Conference on Robotics and Automation (ICRA), 2024
Abstract
A critical goal of autonomy and artificial intelligence is enabling autonomous robots to rapidly adapt in dynamic and uncertain environments. Classic adaptive control and safe control provide stability and safety guarantees but are limited to specific system classes. In contrast, policy adaptation based on reinforcement learning (RL) offers versatility and generalizability but presents safety and robustness challenges. We propose SafeDPA, a novel RL and control framework that simultaneously tackles the problems of policy adaptation and safe reinforcement learning. SafeDPA jointly learns an adaptive policy and dynamics models in simulation, predicts environment configurations, and fine-tunes the dynamics models with few-shot real-world data. A Control Barrier Function (CBF)-based safety filter on top of the RL policy is introduced to ensure safety during real-world deployment. We provide theoretical safety guarantees for SafeDPA and show its robustness against learning errors and extra perturbations. Comprehensive experiments on (1) classic control problems (Inverted Pendulum), (2) simulation benchmarks (Safety Gym), and (3) a real-world agile robotics platform (RC Car) demonstrate the superiority of SafeDPA in both safety and task performance over state-of-the-art baselines. Notably, SafeDPA exhibits strong generalizability, achieving a 300% increase in safety rate compared to the baselines under unseen disturbances in real-world experiments.
Proposed Method
Overview of the four phases of SafeDPA:
In Phase 1 (a), the environment encoder and dynamics model are jointly trained on an offline dataset collected by a random policy in simulation (a minimal training sketch follows this list).
In Phase 1 (b), we freeze the parameters of the environment encoder and train the base policy in simulation using model-free RL.
In Phase 2, we train the adaptation module to regress the environment encoder's output from the history of states and actions, using on-policy data.
In Phase 3, we fine-tune the dynamics model learned in simulation with few-shot real-world data.
In Phase 4, we leverage the learned adaptive dynamics to construct a CBF-based safety filter on top of the adaptive RL policy, ensuring safety during real-world deployment (see the safety-filter sketch after this list).
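The following is a minimal sketch, not the authors' code, of how the training in Phases 1 (a) and 2 could look, assuming simple MLP modules and pre-collected batches of simulated transitions (state, action, environment configuration, next state); all module names, dimensions, and hyperparameters are illustrative.

```python
# Illustrative sketch of SafeDPA Phase 1(a) and Phase 2 training steps.
# Assumptions (not from the paper): MLP architectures, MSE losses, and
# batches provided by an external data loader.
import torch
import torch.nn as nn

class EnvEncoder(nn.Module):
    """Maps the ground-truth environment configuration e to a latent code z."""
    def __init__(self, e_dim, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(e_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
    def forward(self, e):
        return self.net(e)

class DynamicsModel(nn.Module):
    """Predicts the next state from (s, a, z), conditioned on the latent code."""
    def __init__(self, s_dim, a_dim, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + a_dim + z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, s_dim))
    def forward(self, s, a, z):
        return self.net(torch.cat([s, a, z], dim=-1))

class AdaptationModule(nn.Module):
    """Estimates z from a history of states and actions, so the ground-truth
    environment configuration is not needed at deployment time."""
    def __init__(self, s_dim, a_dim, horizon, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear((s_dim + a_dim) * horizon, 128), nn.ReLU(),
                                 nn.Linear(128, z_dim))
    def forward(self, hist):  # hist: (batch, horizon * (s_dim + a_dim))
        return self.net(hist)

def phase1a_step(encoder, dynamics, opt, batch):
    """Joint training step: encoder and dynamics model share the one-step
    prediction loss on random-policy simulation data."""
    s, a, e, s_next = batch
    z = encoder(e)
    loss = nn.functional.mse_loss(dynamics(s, a, z), s_next)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def phase2_step(encoder, adapter, opt, batch):
    """Adaptation-module step: regress the frozen encoder's latent code
    from the on-policy state-action history."""
    hist, e = batch
    with torch.no_grad():
        z_target = encoder(e)
    loss = nn.functional.mse_loss(adapter(hist), z_target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Under the same assumptions, the Phase 3 fine-tuning could reuse a step like `phase1a_step` on the few-shot real-world transitions, typically with a much smaller learning rate, with the adaptation module's estimate of z standing in for the encoder's output when the ground-truth environment configuration is unavailable.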
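Below is a minimal sketch, not the authors' implementation, of the Phase 4 safety filter, assuming the learned dynamics are locally linearized in the action so that the discrete-time CBF condition reduces to a quadratic program; the function name, the linearization inputs `f0` and `B`, the barrier `h`, and the coefficient `alpha` are all illustrative.

```python
# Illustrative CBF-QP safety filter on top of the adaptive RL policy.
# Assumption (not from the paper): the learned dynamics are linearized as
# x_next ~= x + f0 + B @ u around the current state and the RL action.
import numpy as np
import cvxpy as cp

def cbf_safety_filter(u_rl, h_x, dh_dx, f0, B, alpha=0.5, u_max=1.0):
    """Project the RL action onto the set of actions satisfying the
    discrete-time CBF condition  h(x_next) - h(x) >= -alpha * h(x).

    u_rl  : (m,)   action proposed by the adaptive RL policy
    h_x   : float  barrier value h(x) at the current state
    dh_dx : (n,)   gradient of h at the current state
    f0    : (n,)   predicted state change under zero action (learned model)
    B     : (n, m) action sensitivity of the linearized learned model
    """
    m = u_rl.shape[0]
    u = cp.Variable(m)
    constraints = [dh_dx @ (f0 + B @ u) >= -alpha * h_x,   # CBF condition
                   cp.abs(u) <= u_max]                      # actuation limits
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_rl)), constraints)
    prob.solve(solver=cp.OSQP)
    # Fall back to the unfiltered RL action if the QP is infeasible
    # under the learned model.
    return u.value if u.value is not None else u_rl
```

Falling back to the unfiltered action on infeasibility is only one possible design choice; adding a slack variable to the CBF constraint or switching to a conservative braking action are common alternatives.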
Real-world Results
Real-world experiments on the RC Car cover four scenarios: Car, Box, Chair, and Large Chair. Each scenario compares SafeDPA (Ours), SafeDPA without Fine-tuning, and Rapid Motor Adaptation - Penalty.