Cautious Adaptation for Reinforcement Learning in Safety-Critical Settings (CARL)

Abstract

Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous, imperiling the RL agent, other agents, and the environment. To overcome this difficulty, we propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments, such as a simulator, before adapting to the target environment where failures carry heavy costs. We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk, which in turn enables relative safety through risk-averse, cautious adaptation. CARL first employs model-based RL to train a probabilistic model that captures uncertainty about transition dynamics and catastrophic states across varied source environments. Then, when exploring a new safety-critical environment with unknown dynamics, the CARL agent plans to avoid actions that could lead to catastrophic states. In experiments on car driving, cartpole balancing, half-cheetah locomotion, and robotic object manipulation, CARL successfully acquires cautious exploration behaviors, yielding higher rewards with fewer failures than strong RL adaptation baselines.
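
The sketch below illustrates the planning idea described above: a probabilistic ensemble stands in for the dynamics/catastrophe model learned in the source environments, and candidate action sequences are scored pessimistically across ensemble members, with a penalty for predicted catastrophes. This is a minimal, hypothetical sketch (random-shooting MPC with made-up names like EnsembleModel and plan_cautiously), not the authors' implementation or API.

import numpy as np

class EnsembleModel:
    """Stand-in for a probabilistic ensemble trained on diverse source envs.

    Each member is a callable (state, action) -> (next_state, reward, p_catastrophe).
    """
    def __init__(self, members):
        self.members = members

    def rollout(self, member, state, actions):
        # Roll one ensemble member forward along a candidate action sequence,
        # accumulating predicted reward and tracking the worst catastrophe risk.
        total_reward, worst_p_cat = 0.0, 0.0
        for a in actions:
            state, r, p_cat = member(state, a)
            total_reward += r
            worst_p_cat = max(worst_p_cat, p_cat)
        return total_reward, worst_p_cat

def plan_cautiously(model, state, horizon, n_candidates, action_dim,
                    catastrophe_penalty=100.0, rng=None):
    """Choose the first action of the candidate plan with the best
    risk-averse score: the worst return over ensemble members, minus a
    penalty whenever a member predicts a catastrophic state."""
    rng = rng or np.random.default_rng()
    best_score, best_plan = -np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        member_scores = []
        for member in model.members:
            ret, p_cat = model.rollout(member, state, actions)
            member_scores.append(ret - catastrophe_penalty * p_cat)
        score = min(member_scores)  # pessimistic over ensemble disagreement
        if score > best_score:
            best_score, best_plan = score, actions
    return best_plan[0]  # MPC-style: execute only the first action, then replan

Taking the minimum over ensemble members is what produces the cautious behavior seen in the rollout videos: plans that any member predicts could end in catastrophe score poorly, even if their average predicted reward is high.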

Rollout Videos (Regular Model-Based RL on the left, CARL on the right)

CartPole

CARL is more willing to pay an action penalty to keep the pole upright longer.

Half Cheetah

CARL is able to recover from near-catastrophic states in which the disabled front foot folds under the body.

Duckietown Driving

CARL is willing to sacrifice reward in order to make a wider turn when driving.

Presentation Slides

CARL - ICML

Bibtex

@misc{zhang2020cautious,
    title={Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings},
    author={Jesse Zhang and Brian Cheung and Chelsea Finn and Sergey Levine and Dinesh Jayaraman},
    year={2020},
    eprint={2008.06622},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}