As robots and other intelligent agents increasingly tackle complex problems in unstructured settings, programming their behavior is becoming more laborious and expensive, even for domain experts. It is often easier to demonstrate a desired behavior than to engineer it manually. Imitation learning seeks to enable the learning of behaviors from fast, natural inputs such as task demonstrations and interactive corrections.
However, human-generated time-series data is often difficult to interpret: making sense of it requires segmenting activities and behaviors, understanding context, and generalizing from a small number of examples. Recent advances in imitation learning algorithms for both behavior cloning and inverse reinforcement learning, especially methods based on training deep neural networks, have enabled robots to learn a wide range of tasks from humans under relaxed assumptions. Nevertheless, real-world robotics tasks still pose difficulties for many of these algorithms, and it is important for the research community to identify the greatest challenges facing imitation learning for robotics.
Topics of Interest
This workshop will bring together area experts and student researchers to discuss both the advances made in imitation learning for robotics and the major challenges for future research efforts.
The topics to be discussed include, but are not limited to:
- Interactive Imitation Learning
- Multi-modal Imitation Learning
- Deep Inverse Reinforcement Learning and Optimal Control
- Cognitive Models for Learning from Demonstration and Planning
- One/Few-shot Imitation Learning
- Learning by Observing Third-Person Demonstrations
- Learning from Non-Expert Demonstrations