RSS 2018 Workshop

Perspectives on Robot Learning: Causality and Imitation

Gates-Hillman 4401, June 30, 2018

Abstract

Sequential decision making and reinforcement learning in complex environments with sparse rewards and stochastic dynamics is a long-standing challenge. Despite the recent success of RL in games, applying these methods to robotics remains difficult in the face of safety concerns and the cost of environmental interaction. At the same time, the absence of an informative reward function can render this family of iterative learning methods impractical.

In contrast, imitation learning algorithms guide an agent towards the correct behavior by leveraging a supervisor. Imitation may imply "do the same thing," but ideally we seek semantic similarity rather than literal behavior cloning. Such generalization often requires exploration that goes beyond trajectory replay. For instance, a robot may replicate a human trajectory to open a door but might fail to open a window or a fridge. Generalizing in this way requires a representation of the task that facilitates causal exploration. We need to build joint action-perception representations that encode perceivable effects, and to select actions as operations that produce an intended future percept from the given current percept.

Recent research has reiterated the efficiency of imitation-learning-based methods over RL for learning in physical domains, as well as for addressing the problems of limited, non-i.i.d. data in imitation. At the same time, research in causality has produced promising abstractions for robotics. There is an exciting opportunity in combining these ideas to achieve generalization, whereby imitation guides task representations and causality enables exploration for generalization.

This workshop will serve as a platform to discuss the impact and merit of algorithmic techniques in imitation learning and causal inference, and their applications in robotics. We invite submissions advancing the theory, abstractions, and systems of both imitation and causality for robotics.

Schedule

8:45 - 9:00: Introductory Remarks (Animesh Garg)

9:00 - 9:30: Jeannette Bohg: Causality - What Gives?

9:30 - 10:00: Invited talk: Sergey Levine

10:00 - 10:30: Poster spotlights

  • Demonstration and Imitation of Novel Behaviors under Safety Aware Shared Control. A. Broad, T. Murphey, B. Argall
  • Bidirectional Cause-Effect Reasoning as the Basis of Imitation Learning. G. Katz, G. Davis, R. Gentili, J. Reggia
  • Learning from Demonstration of Trajectory Preferences through Causal Modeling and Inference. D. Angelov, S. Ramamoorthy
  • Learning Hierarchical Policies from Unsegmented Demonstrations using Causal Information. M. Sharma, A. Sharma, N. Rhinehart, K. Kitani
  • Robot Learning with Invariant Hidden Semi-Markov Models. A. Tanwani, J. Lee, M. Laskey, S. Krishnan, R. Fox, K. Goldberg

10:30 - 11:00: Coffee Break + Posters

11:00 - 12:00: Keynote: Peter Spirtes: An Overview of Automated Aids for Causal Inference

12:00 - 1:30: Lunch + Posters

1:30 - 2:00: Poster spotlights

  • Parsing by Imitation by Parsing Imitation. T. Shankar, N. Rhinehart, K. Muelling, K. M. Kitani
  • Learning to See Physics via Visual De-animation. J. Wu
  • Actional-Perceptual Causality: Concepts and Inductive Learning for AI and Robotics. S.B. Ho, M. Edmonds, S.C. Zhu
  • Stability Analysis of On-Policy Imitation Learning Algorithms Using Dynamic Regret. J. Lee, M. Laskey, A. Tanwani, K. Goldberg
  • Task-specific Motion Planning using User-Guidance, Imitation, and Self-Evaluation. R. Laha, N. Chakraborty

2:00 - 2:30: Invited talk: Marc Toussaint: Physical Reasoning & Robot Manipulation

2:30 - 3:00: Coffee break + Posters

3:00 - 3:45: Keynote: Elias Bareinboim: An Introduction to Causal Reinforcement Learning

3:45 - 4:15: Invited talk: Mark Edmonds: Causal Imitation: Integrating Observations and Interventions

4:15 - 4:45: Ruslan Salakhutdinov: Structured Memory for Deep Reinforcement Learning

4:45 - 5:30: Panel Discussion + Closing Remarks

Call for Abstracts

Important Dates:

  • Submission Deadline: June 7, Midnight PST (extended from June 3)
  • Decisions: June 9
  • Camera Ready: June 11 (Regular Registration Deadline)
  • Workshop: June 30

Submission Info

We solicit 2-4 page extended abstracts conforming to the official RSS style guidelines. A paper template is available in LaTeX and Word.

Submissions may include late-breaking results, material currently under review, or archival/previously accepted work (please note this in the submission).

Please note that accepted contributions will be presented in an interactive poster format (non-archival). A small set of these will be featured as spotlight talks. Accepted contributions and posters will be posted on the workshop website upon author approval.

Submission page: https://easychair.org/cfp/rss18-cir

Topics of Interest

Topics of interest include, but are not limited to, the following:

  • Sample Efficiency in Imitation Learning
  • Hybrid Reinforcement and Imitation learning
  • Reinforcement learning with links to causal inference and counterfactual reasoning
  • Interfaces of agent-based systems and causal inference
  • Structured Representations in Robotics: Perception, Planning, and Control
  • Causal Inference
  • Discriminative learning vs. generative modeling in counterfactual settings
  • Interactive experimental control vs. counterfactual estimation
  • Uncertainty representations in Deep Learning for robotics
  • Learning Models and System Identification
  • Combining Model-free and Model-based Methods
  • Efficient and safe exploration in model-based methods
  • Generative models of dynamics

Organizers

Animesh Garg

Stanford AI Lab

Michael Laskey

UC Berkeley, BAIR

Yuke Zhu

Stanford AI Lab

Jiajun Wu

MIT CSAIL

Stefano Ermon

Stanford AI Lab