Advances & Challenges in Imitation Learning for Robotics
Virtual Workshop for RSS2020
July 12, 2020
We now have a sli.do channel for the workshop.
Pre-recorded talks are now available on our website!
As robots and other intelligent agents increasingly address complex problems in unstructured settings, programming their behavior is becoming more laborious and expensive, even for domain experts. Frequently, it is easier to demonstrate a desired behavior than to engineer it manually. Imitation learning seeks to enable the learning of behaviors from fast and natural inputs such as task demonstrations and interactive corrections.
However, human-generated time-series data is often difficult to interpret, requiring the ability to segment activities and behaviors, understand context, and generalize from a small number of examples. Recent advances in imitation learning algorithms for both behavior cloning and inverse reinforcement learning—especially methods based on training deep neural networks—have enabled robots to learn a wide range of tasks from humans under relaxed assumptions. Yet real-world robotics tasks still pose challenges for many of these algorithms, and it is important for the research community to identify the greatest challenges facing imitation learning for robotics.
This workshop will bring together area experts and student researchers to discuss the advances that have been made in the field of imitation learning for robotics and the major challenges for future research efforts.
The topics to be discussed include, but are not limited to: