Workshop on Learning from Demonstrations for High Level Robotic Tasks
Robotics: Science and Systems - June 2018 - Pennsylvania, USA
Abstract
Many real-world tasks require robots to solve complex decision-making problems and to perform dexterous low-level control for seamless interaction with the surrounding environment. Learning from Demonstrations (LfD) can greatly reduce the difficulty of learning in such settings by making use of expert demonstrations. These demonstrations provide snapshots of near-optimal behaviours, guiding the learning process and alleviating the need to start from scratch or to manually engineer parts of the solution. LfD has long been popular within robotics, neuroscience, natural language processing and cognitive science, and is now seeing a resurgence in the machine learning community, particularly with the advent of deep learning techniques.
In this workshop, we plan to cover a range of LfD techniques and to invite discussion of its future applications in robotics, towards solving long-horizon tasks that require hierarchical decision making from multi-modal input (e.g. visual, haptic, language and auditory). We have invited well-known researchers in machine learning, cognitive science and robotics, with the aim of encouraging collaboration and sharing new ideas across this multidisciplinary field.
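As background for newcomers, the simplest LfD recipe is behavioural cloning: treat the expert demonstrations as supervised (state, action) pairs and regress a policy onto them. The sketch below is only an illustration of that idea on synthetic data, using a hypothetical linear policy fit by ridge-regularised least squares; it does not describe any particular workshop contribution.

```python
# Minimal behavioural-cloning sketch (illustrative only).
# Assumes demonstrations are arrays of states and matching expert actions.
import numpy as np

def fit_linear_policy(states, actions, reg=1e-3):
    """Fit actions ~ W @ [state, 1] via ridge-regularised least squares."""
    X = np.hstack([states, np.ones((states.shape[0], 1))])  # append bias term
    # Normal equations with a small ridge term for numerical stability.
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ actions)

def policy(W, state):
    """Predict an action for a new state with the cloned policy."""
    return np.append(state, 1.0) @ W

# Toy demonstration data: 100 four-dimensional states, two-dimensional actions.
rng = np.random.default_rng(0)
demo_states = rng.normal(size=(100, 4))
demo_actions = demo_states @ rng.normal(size=(4, 2))  # stand-in "expert"
W = fit_linear_policy(demo_states, demo_actions)
print(policy(W, demo_states[0]))
```

In practice the linear model would be replaced by a neural network, and the topics below cover far richer settings (inverse reinforcement learning, one/few-shot imitation, third-person observation), but this supervised structure is the common starting point.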
Topics
- Learning from high-dimensional demonstrations
- Deep inverse reinforcement learning and optimal control
- Predicting behavior from high-dimensional observations
- Learning from multiple sensory modalities
- High-dimensional knowledge transfer for sequential planning
- Cognitive models for learning from demonstration and planning
- One/few-shot imitation learning
- Learning by observing third-person demonstrations
Schedule
9:00 - 9:15    Introduction: introductory remarks
9:15 - 9:45    Chelsea Finn
9:45 - 10:15   Byron Boots
10:15 - 11:00  Poster Teasers: poster presenters are invited to give lightning talks
11:00 - 11:30  Posters/Coffee: poster presentations
11:30 - 12:00  Yevgen Chebotar
2:00 - 2:30    Maya Cakmak
2:30 - 3:00    Drew Bagnell
3:00 - 4:00    Posters/Coffee: poster presentations
4:00 - 4:30    Anca Dragan
4:30 - 5:00    Jeannette Bohg
5:00 - 5:30    Speakers Panel
Call for Papers
Submission site: easychair.org/conferences/?conf=rsswlfd18
Paper format: full RSS paper format, with a page limit of 8 pages (excluding citations).
Submissions are not double-blind, i.e. we will see author names.
Important Dates
Friday, May 25th: Paper Submission Deadline
Wednesday, May 30th: Paper Acceptance Notification
Friday, June 29th: Workshop
Accepted Papers
Learning to Use a Ratchet by Modeling Spatial Relations in Demonstrations
Li Yang Ku*, Scott Jordan*, Julia Badger, Erik G. Learned-Miller and Roderic A. Grupen
Towards Specification Learning from Demonstrations
Ankit Shah, Julie Shah
High Level Representation of Kinesthetically Learned Motions for Human-Robot Collaborative Tasks
Heramb Nemlekar, Max Merlin, John Chiodini, Zhi Li
Visual Robot Task Planning
Chris Paxton, Yotam Barnoy, Kapil Katyal, Raman Arora and Gregory D. Hager
Learning Real-World Sequential Decision Tasks with Abstract Markov Decision Processes and Demonstration-Guided Exploration
David Kent, Siddhartha Banerjee, Sonia Chernova
Sim2Real Viewpoint Invariant Visual Servoing by Recurrent Control
Fereshteh Sadeghi, Alexander Toshev, Eric Jang, Sergey Levine
TACO: Learning Task Decomposition via Temporal Alignment for Control
Kyriacos Shiarlis, Markus Wulfmeier, Sasha Salter, Shimon Whiteson, and Ingmar Posner
Where Do You Think You’re Going?: Inferring Beliefs about Dynamics from Behavior
Siddharth Reddy, Anca D. Dragan, Sergey Levine
Schedule for Lightning Talks (10:15 - 11:00)
10:15-10:20: Learning to Use a Ratchet by Modeling Spatial Relations in Demonstrations
10:20-10:25: Towards Specification Learning from Demonstrations
10:25-10:30: High Level Representation of Kinesthetically Learned Motions for Human-Robot Collaborative Tasks
10:30-10:35: Visual Robot Task Planning
10:35-10:40: Learning Real-World Sequential Decision Tasks with Abstract Markov Decision Processes and Demonstration-Guided Exploration
10:40-10:45: Sim2Real Viewpoint Invariant Visual Servoing by Recurrent Control
10:45-10:50: TACO: Learning Task Decomposition via Temporal Alignment for Control
10:50-10:55: Where Do You Think You’re Going?: Inferring Beliefs about Dynamics from Behavior