Safety Augmented Value Estimation from Demonstrations (SAVED):

Safe Deep Model-Based RL for Sparse Cost Robotic Tasks

Brijen Thananjeyan*, Ashwin Balakrishna*, Ugo Rosolia, Felix Li, Rowan McAllister,

Joseph E. Gonzalez, Sergey Levine, Francesco Borrelli, Ken Goldberg

*Equal Contribution


Abstract: Reinforcement learning (RL) for robotics is challenging due to the difficulty of hand-engineering a dense cost function, which can lead to unintended behavior, and dynamical uncertainty, which makes exploration and constraint satisfaction difficult. We address these issues with a new model-based reinforcement learning algorithm, Safety Augmented Value Estimation from Demonstrations (SAVED), which uses supervision that only identifies task completion and a modest set of suboptimal demonstrations to constrain exploration and learn efficiently while handling complex constraints. We then compare SAVED with 3 state-of-the-art model-based and model-free RL algorithms on 6 standard simulation benchmarks involving navigation and manipulation and a physical knot-tying task on the da Vinci surgical robot. Results suggest that SAVED outperforms prior methods in terms of success rate, constraint satisfaction, and sample efficiency, making it feasible to safely learn a control policy directly on a real robot in less than an hour. For tasks on the robot, baselines succeed less than 5% of the time while SAVED has a success rate of over 75% in the first 50 training iterations. Code and supplementary material are available at

Safety Augmented Value Estimation from Demonstrations (SAVED)

  • SAVED uses a density model to represent regions of the state space from which the agent has high confidence of completing the task

  • Constraints are enforced by sampling trajectories from the learned dynamics to estimate the probability of constraint violation
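The density-model idea above can be illustrated with a minimal sketch: fit a kernel density estimate over states observed to lead to task completion, and treat states of sufficiently high density as the "safe set." The data, the KDE choice, and the `threshold` value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical stand-in for states collected from successful
# demonstrations and rollouts (the paper learns this from data).
successful_states = np.random.default_rng(0).normal(size=(200, 2))

# Kernel density estimate over states known to lead to task completion.
kde = gaussian_kde(successful_states.T)

def in_safe_set(state, threshold=1e-2):
    """Return True if the density model assigns high confidence of
    task completion from this state (threshold is a tuning assumption)."""
    return kde(np.asarray(state, dtype=float).reshape(-1, 1))[0] >= threshold
```

A planner would then only consider trajectories that terminate inside this high-density region, which is what constrains exploration to stay near states with a known route to the goal.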

Simulated Experiments

  • Navigation task with linear dynamics, Gaussian process noise, and non-convex state-space constraints

  • SAVED learns significantly faster than all RL baselines with sparse costs across all tasks

  • SAVED also has higher constraint satisfaction and task success rates than all other RL baselines
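To make the sampling-based constraint enforcement concrete under the navigation setting above, here is a minimal sketch: roll out candidate controls through linear dynamics with additive Gaussian noise many times, and estimate the probability that any visited state violates a constraint. The specific `A`, `B`, noise scale, and circular obstacle are illustrative assumptions, not the paper's actual task parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear dynamics with Gaussian noise: x' = A x + B u + w.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
noise_std = 0.05

def violates(x):
    # Hypothetical obstacle: a disc of radius 0.2 centered at (1, 0).
    return np.linalg.norm(x - np.array([1.0, 0.0])) < 0.2

def collision_probability(x0, controls, n_samples=500):
    """Monte Carlo estimate of P(some state in the rollout violates
    constraints); a planner would reject plans exceeding a tolerance."""
    violations = 0
    for _ in range(n_samples):
        x = np.array(x0, dtype=float)
        for u in controls:
            x = A @ x + B @ u + rng.normal(scale=noise_std, size=2)
            if violates(x):
                violations += 1
                break
    return violations / n_samples
```

Plans whose estimated violation probability exceeds a user-set tolerance are discarded, which is how probabilistic constraints can be enforced without a closed-form model of the noise's effect over a whole trajectory.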

Robot Experiments

  • Can successfully learn a knot-tying task on the da Vinci surgical robot: arm 1 wraps the thread around arm 2, which grasps the other end of the thread and tightens the knot

  • After just 15 iterations, the agent completes the task relatively consistently with only a few failures, and converges to an iteration cost of 22, faster than the demonstrations, which have an average iteration cost of 34

  • SAVED quickly learns to speed up with only occasional constraint violations and stabilizes in the goal set