Structured Approaches to Robot Learning for Improved Generalization

RSS Workshop | OSU, Corvallis | July 13, 2020


Recent advances in machine learning techniques, the emergence of deep learning, and access to big data and powerful computing hardware have led to great strides in the state of the art in robotics and artificial intelligence. Many of these learning-based methods tend to be black boxes, eschewing much of the careful state estimation, algorithm design, and modular structuring of traditional robotics pipelines in favor of general function approximators that rely primarily on big data and near-limitless computation. While this has led to great successes in learning and solving complex tasks directly from raw sensory information (e.g., autonomous driving), we are still struggling to replicate the in-domain generalization, knowledge transfer, interpretability, and safety capabilities inherent in traditional robotic systems.

Can we bridge this gap between traditional robotics pipelines and modern learning-based methods? Can we combine these paradigms in a way that retains the strengths of both? These are some of the questions we want to explore in this workshop. We plan to bring together researchers in robotics, computer vision, and machine learning to investigate structured approaches to robot learning at the intersection of these paradigms, and how they can enable us to generalize knowledge across tasks.

Our notion of “structure” is very general. In the context of robot learning, it can manifest in many ways: as a specific deep architecture, a training approach, an intermediate representation, a loss function, etc. Special emphasis will be placed on methods that tightly integrate insights from both paradigms and are demonstrably applicable in the real world.

Topics of interest include, but are not limited to:

  • Structured inference and learning for robotics
  • Deep learning with structure and priors
  • Learning structured representations for perception, planning and control
  • Integrating learning and model-based robotics
  • Structured losses and semi/self-supervised learning
  • Transfer and multi-task learning
  • Reinforcement/Imitation learning using domain knowledge
  • Autonomous navigation, mobile manipulation with structured learning
  • Structured optimization with deep learning and automatic differentiation
  • Deep learning with graphical models


Invited Speakers
Thomas Funkhouser (Princeton)

Byron Boots (UW / NVIDIA)

Jeannette Bohg (Stanford)

Jitendra Malik (Berkeley / FAIR)

Karol Hausman (Google Brain)

Raquel Urtasun (UToronto / Uber ATG) (tentative)

Leslie Kaelbling (MIT)

Pieter Abbeel (Berkeley / Covariant)

Tentative Schedule

08:30 - 08:45 | Introduction

08:45 - 09:15 | Speaker 1

09:15 - 09:45 | Speaker 2

09:45 - 10:15 | Poster Spotlights 1

10:15 - 11:00 | Posters 1 / Coffee

11:00 - 11:30 | Speaker 3

11:30 - 12:00 | Speaker 4

12:00 - 01:30 | Lunch

01:30 - 02:00 | Speaker 5

02:00 - 02:30 | Speaker 6

02:30 - 03:00 | Poster Spotlights 2

03:00 - 04:00 | Posters 2 / Coffee

04:00 - 04:30 | Speaker 7

04:30 - 05:30 | Panel Discussion

Submission Details

We solicit extended abstracts of up to 4 pages (excluding citations and supplemental material) conforming to the official RSS style guidelines. Submissions may include archival or previously accepted work (please note this in the submission; if necessary, we may take it into consideration in the acceptance decision). Reviewing will be single-blind. All accepted contributions will be presented in interactive poster sessions, and a subset will be featured in the workshop as spotlight presentations.

Submission link:

Important Dates

May 14 - Submission deadline (AoE time)

May 28 - Notification of acceptance

June 21 - Camera ready deadline

July 13 - Workshop

Related Workshops

  • ICML 2016 Workshop on “Abstraction in Reinforcement Learning” (designing and learning abstractions)
  • NeurIPS 2017 Workshop on “Hierarchical Reinforcement Learning” (learning hierarchically structured action and state spaces)
  • LLARLA at ICML 2017–2018 (lifelong transfer and meta-learning in reinforcement learning)
  • AutoML at ICML 2017–2018 (meta-learning in domains including reinforcement learning)
  • MetaLearn at NeurIPS 2017–2018 (meta-learning in domains including reinforcement learning)
  • NAMPI at NeurIPS 2016 and ICML 2018 (structure and modularity in domains including reinforcement learning)
  • Workshop on Multi-Task and Lifelong Reinforcement Learning (ICML 2019)
  • Continual Learning (NeurIPS 2018)
  • Task-Agnostic Reinforcement Learning (ICLR 2019)
  • Structure and Priors in Reinforcement Learning (ICLR 2019)
  • RSS 2018 workshop on Learning and Inference in Robotics
  • Robot Learning (NeurIPS 2017/19)
  • Generative Modeling and Model-based Reasoning (ICML 2019)


Organizers
Arunkumar Byravan (DeepMind)

Markus Wulfmeier (DeepMind)

Franziska Meier (FAIR)

Mustafa Mukadam (FAIR)

Nicolas Heess (DeepMind)

Angela Schoellig (UToronto)

Dieter Fox (UW / NVIDIA)