Imitation Learning and its Challenges in Robotics

NeurIPS workshop | Montreal, Canada | Dec 7, 2018

Description

Many animals, including humans, can acquire skills, knowledge, and social cues from a very young age. This ability to imitate by learning from demonstrations has inspired research across disciplines such as anthropology, neuroscience, psychology, and artificial intelligence. In AI, imitation learning (IL) serves as an essential tool for acquiring skills that are difficult to program by hand. IL is particularly valuable in robotics, where learning by trial and error (reinforcement learning) can be hazardous in the real world. Despite many recent breakthroughs in IL, several challenges must still be addressed before robots can operate freely and interact with humans in the real world.
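
In its simplest form, IL reduces to behavioral cloning: supervised regression from observed states to the demonstrator's actions. The sketch below is purely illustrative and tied to no particular method from the workshop; the dimensions, network size, and synthetic "demonstrations" are assumptions made for the example.

```python
# Minimal behavioral cloning sketch (illustrative; all data here is synthetic).
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2  # assumed sizes for a toy continuous-control task

# Stand-in for a small set of expert demonstrations: (state, action) pairs.
# In practice these would come from teleoperation or kinesthetic teaching.
states = torch.randn(256, STATE_DIM)
actions = torch.randn(256, ACTION_DIM)

# A small policy network mapping states to actions.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Behavioral cloning = supervised regression onto the demonstrated actions.
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(policy(states), actions)
    loss.backward()
    optimizer.step()
```

Even this naive formulation surfaces the issues the workshop is concerned with: small errors compound as the learned policy drifts away from the states seen in the demonstrations, and the approach says nothing about safety or demonstration quality.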


Some important challenges include: 1) achieving good generalization and sample efficiency when the user can provide only a limited number of demonstrations with little to no feedback; 2) learning behaviors that are safe in human environments, requiring as few user safety overrides as possible without being overly conservative (see the sketch after this paragraph); and 3) leveraging data from multiple sources, including non-human sources, since limitations in hardware interfaces often lead to poor-quality demonstrations.
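
To make challenge 2) concrete, one common pattern is intervention-based data collection: a human supervisor overrides the learner only when its behavior becomes unsafe, and only the overrides are added to the training set (in the spirit of DAgger-style methods; see also contribution #17 below). The sketch is schematic, assuming hypothetical `env`, `learner`, `expert`, and `is_unsafe` interfaces rather than any specific library.

```python
# Schematic human-gated data collection loop (all interfaces are hypothetical).

def collect_with_interventions(env, learner, expert, is_unsafe, horizon=200):
    """Roll out the learner; the human expert takes over only when unsafe.

    Returns the (state, expert_action) pairs recorded during overrides,
    to be appended to the learner's training set.
    """
    dataset = []
    state = env.reset()
    for _ in range(horizon):
        action = learner.act(state)
        if is_unsafe(state, action):          # safety gate: human intervenes
            action = expert.act(state)        # substitute a safe action
            dataset.append((state, action))   # record only the overrides
        state, done = env.step(action)        # assumed (next_state, done) API
        if done:
            break
    return dataset
```

The tension in challenge 2) is visible in the `is_unsafe` gate: triggering it too often burdens the user, while triggering it too rarely leaves unsafe behavior uncorrected.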


In this workshop, we aim to bring together researchers and experts in robotics, imitation and reinforcement learning, deep learning, and human-robot interaction to:

  • Formalize the representations and primary challenges in IL as they pertain to robotics
  • Delineate the key strengths and limitations of existing approaches with respect to these challenges
  • Establish common baselines, metrics, and benchmarks, and identify open questions

Invited Speakers

Peter Stone (UT Austin)

Sonia Chernova (Georgia Tech)

Ingmar Posner (Oxford University)

Yisong Yue (Caltech)

Byron Boots (Georgia Tech / NVIDIA)

Anca Dragan (UC Berkeley / Waymo)

Dorsa Sadigh (Stanford)

Important Dates

Oct 19 | Submission deadline (AoE)

Oct 29 | Notification of acceptance

Nov 16 | Camera-ready deadline

Dec 7 | Workshop

Call for Abstracts

We solicit extended abstracts of up to 4 pages (excluding references) conforming to the NeurIPS style. Submissions may include archival or previously accepted work (please note this in the submission). Reviewing will be single-blind.

Submission link: https://easychair.org/conferences/?conf=nips18ilr

Topics of interest include, but are not limited to:

  • Sample efficiency in imitation learning
  • Learning from high-dimensional demonstrations
  • Learning from observations
  • Learning with minimal demonstrator effort
  • Few-shot imitation learning
  • Risk-aware imitation learning
  • Learning to gain user trust
  • Learning from multi-modal demonstrations
  • Learning with imperfect demonstrations

All accepted contributions will be presented in interactive poster sessions. A subset will also be featured as spotlight presentations.

Travel Awards:

With the generous support of our sponsors, we are excited to offer a small number of travel awards intended to partially offset the cost of attendance (registration and most travel costs). Only presenting students and postdocs of accepted contributions are eligible for these awards. Applications will be accepted alongside submissions.

Thank you to those who applied; we will announce the award recipients at the workshop.

Schedule

08:55 - 09:00 | Organizers | Introduction

09:00 - 09:30 | Peter Stone

09:30 - 10:00 | Sonia Chernova

10:00 - 10:15 | Contributed Spotlights | #1 to #5

10:15 - 11:00 | Poster Session I and Coffee Break | #1 to #20

11:00 - 11:30 | Ingmar Posner

11:30 - 12:00 | Dorsa Sadigh

12:00 - 14:00 | Lunch Break

14:00 - 14:30 | Byron Boots

14:30 - 14:45 | Dileep George | Industry Spotlight

14:45 - 15:30 | Poster Session II and Coffee Break | #1 to #20

15:30 - 16:00 | Yisong Yue

16:00 - 16:30 | Anca Dragan

16:30 - 17:30 | Panel Discussion

Contributed Papers

1. Sam Zeng, Vaibhav Viswanathan, Cherie Ho and Sebastian Scherer. Learning Reactive Flight Control Policies: From LIDAR Measurements to Actions

2. Muhammad Asif Rana, Daphne Chen, Reza Ahmadzadeh, Jake Williams, Vivian Chu and Sonia Chernova. A Large-Scale Benchmark Study Investigating the Impact of User Experience, Task Complexity, and Start Configuration on Robot Skill Learning

3. Dequan Wang, Coline Devin, Qi-Zhi Cai, Philipp Krähenbühl and Trevor Darrell. Learning to Drive with Monocular Plan View

4. Pim de Haan, Dinesh Jayaraman and Sergey Levine. Causal Confusion in Imitation Learning

5. Wen Sun, Hanzhang Hu, Byron Boots and Drew Bagnell. Provably Efficient Imitation Learning from Observation Alone

6. Laurent George, Thibault Buhet, Emilie Wirbel, Gaetan Le-Gall and Xavier Perrotton. Imitation Learning for End to End Vehicle Longitudinal Control with Forward Camera

7. Ibrahim Sobh and Nevin Darwish. End-to-End Framework for Fast Learning Asynchronous Agents

8. Konrad Zolna, Negar Rostamzadeh, Yoshua Bengio, Sungjin Ahn and Pedro O. Pinheiro. Reinforced Imitation Learning from Observations

9. Nicholas Rhinehart, Rowan McAllister and Sergey Levine. Deep Imitative Models for Flexible Inference, Planning, and Control

10. Mohit Sharma, Arjun Sharma, Nicholas Rhinehart and Kris Kitani. Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information

11. Bin Wang, Qiyuan Zhang, Yuzheng Zhuang, Jun Luo, Hongbo Zhang and Wulong Liu. Data-efficient Imitation of Driving Behavior with Generative Adversarial Networks

12. Alex Bewley, Jessica Rigley, Yuxuan Liu, Jeffrey Hawke, Richard Shen, Vinh-Dieu Lam and Alex Kendall. Zero-Shot Driving Imitation via Image Translation

13. Sanjay Thakur, Herke Van Hoof, Kushal Arora, Doina Precup and David Meger. Sample Efficient Learning From Demonstrations on Multiple Tasks using Bayesian Neural Networks

14. Lionel Blondé and Alexandros Kalousis. Sample-Efficient Imitation Learning via Generative Adversarial Nets

15. Aadil Hayat, Sarthak Mittal and Vinay Namboodiri. Multi-Task Learning Using Conditional Generative Adversarial Imitation Learning

16. Ozgur S. Oguz, Ben Pfirrmann, Mingpan Guo and Dirk Wollherr. Learning Hand Movement Interaction Control Using RNNs: From HHI to HRI

17. Michael Kelly, Chelsea Sidrane, Katherine Driggs-Campbell and Mykel J. Kochenderfer. Safe Interactive Imitation Learning from Humans

18. Sujoy Paul and Jeroen Vanbaar. Trajectory-based Learning for Ball-in-Maze Games

19. Michał Garmulewicz, Henryk Michalewski and Piotr Miłoś. Expert-augmented actor-critic for ViZDoom and Montezuma’s Revenge

20. Hongyu Ren, Jiaming Song and Stefano Ermon. Stabilizing Reinforcement Learning via Mutual Imitation

Organizers

Georgia Tech

University of Washington

University of Washington

Sponsors