Structured Approaches to Robot Learning for Improved Generalization

(Virtual) RSS Workshop | July 13, 2020; 9 AM - 1 PM PST

Update: Recorded talks by the speakers are now available on our YouTube channel.

Update: You can now submit your questions for the speaker Q&A sessions via this sli.do link.

Update: The list of accepted papers and spotlight talks from all our poster presenters is now online here.

Update: Live panels, Q&A with the speakers, and poster sessions will take place on July 13 via Zoom during RSS. These will be streamed through the RSS PheedLoop system.

Description

Recent advances in machine learning techniques, the emergence of deep learning, and access to big data and powerful computing hardware have led to great strides in the state of the art in robotics and artificial intelligence. Many of these learning-based methods tend to be black-box, eschewing much of the careful state estimation, algorithm design, and modular structuring of traditional robotics pipelines in favor of general function approximators that rely primarily on big data and near-infinite computation. While this has led to great successes in learning and solving complex tasks directly from raw sensory information (e.g., autonomous driving), we are still struggling to replicate the in-domain generalization, knowledge transfer, interpretability, and safety capabilities inherent in traditional robotic systems.

Can we bridge this gap between traditional robotics pipelines and modern learning-based methods? Can we combine these paradigms in a way that retains the strengths of both? These are some of the questions we want to explore in this workshop. We plan to bring together researchers in robotics, computer vision, and machine learning to investigate structured approaches to robot learning at the intersection of these paradigms, and how such approaches can enable us to generalize knowledge across tasks.

Our notion of “structure” is very general. In the context of robot learning it can manifest in many ways: as a specific deep architecture, a training approach, an intermediate representation, a loss function, etc. A special emphasis will be on methods that tightly integrate insights from both paradigms and are demonstrably applicable in the real world.


Topics of interest include, but are not limited to:

  • Structured inference and learning for robotics

  • Deep learning with structure and priors

  • Learning structured representations for perception, planning and control

  • Integrating learning and model-based robotics

  • Structured losses and semi/self-supervised learning

  • Transfer and multi-task learning

  • Reinforcement/Imitation learning using domain knowledge

  • Autonomous navigation, mobile manipulation with structured learning

  • Structured optimization with deep learning and automatic differentiation

  • Deep learning with graphical models

Speakers

Recorded talks by the speakers are available on our YouTube channel.

Thomas Funkhouser (Princeton)

Byron Boots (UW / NVIDIA)

Jeannette Bohg (Stanford)

Jitendra Malik (Berkeley / FAIR)

Karol Hausman (Google Brain)

Leslie Kaelbling (MIT)

Pieter Abbeel (Berkeley / Covariant)

Raquel Urtasun (UToronto / Uber)

Schedule (July 13; all times are in PST)

Live panel discussions and poster sessions will be streamed through RSS PheedLoop

09:00 - 09:05 | Introduction

09:05 - 10:00 | Panel session 1 (Jitendra Malik, Karol Hausman, Jeannette Bohg, Leslie Kaelbling)

+ Authors of papers from Poster session 1

10:00 - 11:00 | Poster session 1 (Krishna Murthy Jatavallabhula, Iman Nematollahi, Alina Kloss, Sasha Salter, Nicholas Collins, Michael Lutter, Harshit S Sikchi, Mathew Halm)

11:00 - 12:00 | Panel session 2 (Raquel Urtasun, Thomas Funkhouser, Byron Boots, Pieter Abbeel)

+ Authors of papers from Poster session 2

12:00 - 12:55 | Poster session 2 (Rogerio Bonatti, Mengyuan Yan, Achin Jain, Kristen Morse, Alexander Lambert, Tianwei Ni, Gilwoo Lee, Michael Zhu)

12:55 - 13:00 | Concluding remarks

Submission Details

We solicit extended abstracts of up to 4 pages (excluding citations and supplemental material) conforming to the official RSS style guidelines. Submissions may include archival or previously accepted work (please note this in the submission; if necessary, we may take it into consideration for the acceptance decision). Reviewing will be single-blind. All accepted contributions will be presented in interactive poster sessions, and a subset will be featured in the workshop as spotlight presentations.

Submission link: https://cmt3.research.microsoft.com/RLWSRSS2020

Important Dates

May 31 - Submission deadline (AoE time)

June 19 - Notification of acceptance

June 30 - Camera ready deadline

July 13 - Workshop

Related Workshops

  • ICML 2016 Workshop on “Abstraction in Reinforcement Learning” (designing and learning abstractions)

  • NeurIPS 2017 Workshop on “Hierarchical Reinforcement Learning” (learning hierarchically structured action and state spaces)

  • LLARLA at ICML 2017–2018 (lifelong transfer and meta-learning in reinforcement learning)

  • AutoML at ICML 2017–2018 (meta-learning in domains including reinforcement learning)

  • MetaLearn at NeurIPS 2017–2018 (meta-learning in domains including reinforcement learning)

  • NAMPI at NeurIPS 2016 and ICML 2018 (structure and modularity in domains including reinforcement learning)

  • Workshop on Multi-Task and Lifelong Reinforcement Learning (ICML 2019)

  • Continual Learning (NeurIPS 2018)

  • Task-Agnostic Reinforcement Learning (ICLR 2019)

  • Structure and Priors in Reinforcement Learning (ICLR 2019)

  • RSS 2018 workshop on Learning and Inference in Robotics

  • Robot Learning (NeurIPS 2017/19)

  • Generative Modeling and Model-based Reasoning (ICML 2019)

Organizers

Arunkumar Byravan (DeepMind)

Markus Wulfmeier (DeepMind)

Franziska Meier (FAIR)

Mustafa Mukadam (FAIR)

Nicolas Heess (DeepMind)

Angela Schoellig (UToronto)

Dieter Fox (UW / NVIDIA)