Elaborating on Learned Demonstrations with Temporal Logic Specifications

Craig Innes, Subramanian Ramamoorthy

University of Edinburgh

Robotics: Science and Systems 2020

Abstract: Most current methods for learning from demonstrations assume that those demonstrations alone are sufficient to learn the underlying task. This is often untrue, especially if extra safety specifications exist which were not present in the original demonstrations. In this paper, we allow an expert to elaborate on their original demonstration with additional specification information using linear temporal logic (LTL). Our system converts LTL specifications into a differentiable loss. This loss is then used to learn a dynamic movement primitive that satisfies the underlying specification, while remaining close to the original demonstration. Further, by leveraging adversarial training, the system learns to robustly satisfy the given LTL specification on unseen inputs, not just those seen in training. We show that our method is expressive enough to work across a variety of common movement specification patterns such as obstacle avoidance, patrolling, keeping steady, and speed limitation. In addition, we show how to modify a base demonstration with complex specifications by incrementally composing multiple simpler specifications. We also implement our system on a PR-2 robot to show how a demonstrator can start with an initial (sub-optimal) demonstration, then interactively improve task success by including additional specifications enforced with a differentiable LTL loss.
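To give a flavour of the core idea, here is a minimal, illustrative sketch of how an LTL formula can be turned into a differentiable loss over a finite trajectory. This is not the ltl_diff implementation; the function names (`soft_max`, `robustness`), the tuple-based formula encoding, and the smoothing constant `k` are assumptions for illustration. The standard trick is quantitative ("robustness") semantics, with the hard min/max of the Boolean semantics replaced by smooth log-sum-exp surrogates so that gradients flow:

```python
import math

def soft_max(xs, k=10.0):
    # Smooth, differentiable surrogate for max via log-sum-exp.
    m = max(xs)  # shift for numerical stability
    return m + math.log(sum(math.exp(k * (x - m)) for x in xs)) / k

def soft_min(xs, k=10.0):
    return -soft_max([-x for x in xs], k)

def robustness(formula, traj, t=0, k=10.0):
    # Quantitative semantics over a finite trajectory:
    # positive value => formula satisfied, negative => violated.
    op = formula[0]
    if op == "pred":              # ("pred", f) where f maps a state to a real
        return formula[1](traj[t])
    if op == "neg":
        return -robustness(formula[1], traj, t, k)
    if op == "and":
        return soft_min([robustness(f, traj, t, k) for f in formula[1:]], k)
    if op == "always":            # G phi over the remaining horizon
        return soft_min([robustness(formula[1], traj, i, k)
                         for i in range(t, len(traj))], k)
    if op == "eventually":        # F phi over the remaining horizon
        return soft_max([robustness(formula[1], traj, i, k)
                         for i in range(t, len(traj))], k)
    raise ValueError(f"unknown operator {op!r}")

# Example: an "avoid" pattern -- always stay at least 0.5 from an obstacle at x = 0.
avoid = ("always", ("pred", lambda x: abs(x) - 0.5))
safe_traj = [2.0, 1.5, 1.0, 0.8]
loss = max(0.0, -robustness(avoid, safe_traj))  # zero when the spec is satisfied
```

In a training loop, a loss of this shape would be added to the imitation objective, so minimising it pushes the learned movement primitive toward trajectories that satisfy the specification while the imitation term keeps it close to the demonstration.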

Paper: https://arxiv.org/abs/2002.00784

LTL Code: https://github.com/craigiedon/ltl_diff

Synthetic/Robot Demonstration Data: https://drive.google.com/file/d/18DzFm06DEqKdfokRLf3uBCmCn1yARIDd/view?usp=sharing