10:00 - 10:30 | Poster Session and Coffee Break | #2, #3, #5, #7, #8, #9, #10, #11, #12, #16, #17
12:00 - 13:30 | Lunch Break
14:30 - 15:00 | Poster Session and Coffee Break | #1, #4, #6, #13, #14, #15, #18, #19, #20, #21
Invited Talk Abstracts
09:00 - 09:30 | Drew Bagnell | There Is No Deep Reinforcement Learning
We'll explore the exciting recent advances in Reinforcement Learning, and note comparisons and contrasts with past achievements in the field. We take note that Reinforcement Learning and neural networks have been closely tied for decades (i.e., "Neuro-dynamic Programming"), and that advances in RL and advances in neural architectures have been nearly completely orthogonal. We identify key issues that remain incompletely addressed in RL (with neural networks or otherwise), from the stability of bootstrapping to the use of value functions and policies. In the last part of the talk, we zoom in on one of these still-open issues, critical for RL sample efficiency in robotics: understanding the trade-offs between black-box (parameter-space) and white-box (action-space) exploration. Via theory and experiment, we identify a set of key parameters that suggest when white-box methods are likely to be more sample efficient and vice versa. This is joint work with Anirudh Vemula, Wen Sun, Max Likhachev, and others.
09:30 - 10:00 | Angela Schoellig | Combining Models and Data for Enhanced Robot Control and Decision Making
In contrast to computers and smartphones, the promise of robotics is to build devices that can physically interact with the world. The interaction of robots with the physical world may be as simple as moving on roads or in the air, and can be as complex as physically collaborating with humans. Envisioning robots that work in human-centered and interactive environments challenges current robot algorithm design, which has been largely based on a priori knowledge about the system and its environment. In this talk, I will show how we combine models and data to achieve safe and high-performance robot behavior in the presence of uncertainties and unknown effects. Our work focuses on learning robot control and decision-making strategies, and enables robots to adapt their behavior as they move in the world. In this talk, I will highlight how we use "structure" and prior knowledge to (i) appropriately place the learning in the overall system architecture (including choosing the inputs and outputs of the learning module), (ii) enable online learning (i.e., enable the robot to move in the real world and start gathering data), (iii) design data-efficient algorithms, and (iv) provide provable performance and safety guarantees for the learning. An important characteristic of our work is that we identify where unknown effects influence the robot's behavior, without making strong assumptions about those unknown effects themselves. We demonstrate our algorithms on self-flying and self-driving vehicles, as well as on mobile manipulators. More information and videos at: www.dynsyslab.org.
Recent approaches in robotics follow the insight that perception is facilitated by interaction with the environment. First, interaction creates a rich sensory signal that would otherwise not be present. Second, knowledge of the sensory dynamics upon interaction allows prediction and decision-making over a longer time horizon. To exploit these benefits of Interactive Perception for capable robotic manipulation, a robot requires both methods for processing rich sensory feedback and feedforward predictors of the effect of physical interaction. In the first part of this talk, I will present a method for processing rich sensory feedback to perform motion-based segmentation of an unknown number of simultaneously moving objects. The underlying model estimates dense, per-pixel scene flow, followed by clustering in motion trajectory space. We show how this outperforms the state of the art in scene flow estimation and multi-object segmentation. In the second part, I will present a method for predicting the effect of physical interaction with objects in the environment. The underlying model combines an analytical physics model with a learned perception component. In extensive experiments, we show how this hybrid model outperforms purely learned models in terms of generalisation. In both projects, we found that introducing structure greatly reduces the amount of training data required, eases learning and enables extrapolation. Based on these findings, I will discuss the role of structure in learning for robot manipulation.
Being able to do reinforcement learning from scratch on real robots is the long-term goal of our research agenda. A particular challenge in these scenarios is that methods need to be highly data-efficient, since data collection on real robots is time-intensive and often expensive. Bringing in structure, priors or models is often the method of choice to overcome this problem. I will discuss several examples of real-world applications of RL where bringing in structure played a crucial role in the solution. I will then also discuss the downsides of these approaches, and show some recent developments that try to reduce the amount of prior knowledge to a minimum while still being able to solve complex tasks from scratch.
13:30 - 14:00 | Marc Toussaint | Inference vs Optimization Formulations of Planning and Imitation Learning
Formulating planning/control, learning and perception coherently as probabilistic inference is intriguing. But in practice, optimization formulations seem to offer somewhat different ways to express the structure, e.g. in sequential manipulation planning or learning manipulations from demonstration. In this talk I discuss such examples and eventually raise questions on how to reconcile this with a fully probabilistic formulation.
Traditional convolutional networks exhibit unprecedented robustness to intraclass nuisances when trained on big data. Geometric transformations like rotations have been tackled with data augmentation, too. Several approaches have recently shown that data augmentation can be avoided if networks are structured such that feature representations are transformed in the same way as the input, a desirable property called equivariance. In this talk, we show that global equivariance can be achieved for the case of 2D scaling, rotation, and translation, as well as for 3D rotations. We show state-of-the-art results using an order of magnitude lower capacity than competing approaches.
Real-time computational perception is a crucial building block of modern robots and autonomous vehicles. When robotics technologies are used in safety-critical applications (e.g., intelligent transportation, disaster response, military applications of drones), failures in robot perception may result in human casualties. Therefore, it is of paramount importance to develop verification methods, as well as perception techniques with provable performance guarantees, that can enable failure detection and mitigation. In this talk, I discuss our work on fast provably correct solvers for Simultaneous Localization and Mapping (SLAM) and cover recent extensions to the case of SLAM with outliers resulting from perceptual failures. Then, I expand the discussion beyond SLAM, and explore the opportunity of using "structure" and model-based techniques to equip learning-based perception methods with formal performance guarantees.
Spotlight Contributed Talks
10:30 - 10:35 | Lars Kunze | #1 Reading between the Lanes: Road Layout Reconstruction from Partially Segmented Scenes
10:35 - 10:40 | Nima Fazeli | #4 Towards High Fidelity Stochastic Simulators with Data-Augmented Models
10:40 - 10:45 | Yasir Latif | #6 Structure Aware SLAM using Quadrics and Planes
10:45 - 10:50 | Qiaojun Feng | #13 Dense Spatial Segmentation from Sparse Semantic Information
10:50 - 10:55 | Marcus Pereira | #14 Scalable Path Integral Networks
10:55 - 11:00 | Tatiana Lopez-Guevara | #15 To Stir or Not to Stir: Online Estimation of Liquid Properties for Pouring Actions
Accepted Papers
- #1 Lars Kunze, Tom Bruls, Tarlan Suleymanov and Paul Newman. Reading between the Lanes: Road Layout Reconstruction from Partially Segmented Scenes
- #2 Gao Tang and Kris Hauser. Discontinuity-Sensitive Optimal Trajectories Learning by Mixture of Experts
- #3 Noémie Jaquier, Leonel Rozo and Sylvain Calinon. Geometry-aware Robot Manipulability Transfer
- #4 Nima Fazeli and Alberto Rodriguez. Towards High Fidelity Stochastic Simulators with Data-Augmented Models
- #5 Connor Schenck and Dieter Fox. SPNets: Modeling Position Based Fluids using Smooth Particle Networks
- #6 Mehdi Hosseinzadeh, Yasir Latif, Trung Pham, Niko Suenderhauf and Ian Reid. Structure Aware SLAM using Quadrics and Planes
- #7 Maria Bauza, Francois Hogan and Alberto Rodriguez. Learning to Push: A Data-Efficient Approach to Precise and Controlled Pushing
- #8 Gowtham Garimella, Joseph Funke, Chuang Wang and Marin Kobilarov. Neural Network Modeling for Steering Control of an Autonomous Vehicle
- #9 Mahmoud Hamandi, Mike D'Arcy and Pooyan Fazli. Learning to Navigate Like Humans
- #10 Alexander Broad, Ian Abraham, Todd Murphey and Brenna Argall. Structured Neural Network Dynamics for Model-based Control
- #11 Muhammad Asif Rana, Mustafa Mukadam, S. Reza Ahmadzadeh, Sonia Chernova and Byron Boots. Robot Skill Learning from Demonstrations in Cluttered Environments
- #12 Angel Daruna, Zsolt Kira and Sonia Chernova. Towards Scalable Semantic Reasoning Frameworks for Robotic Systems
- #13 Qiaojun Feng, Yue Meng and Nikolay Atanasov. Dense Spatial Segmentation from Sparse Semantic Information
- #14 Marcus Pereira, David Fan, Gabriel Nakajima An and Evangelos Theodorou. Scalable Path Integral Networks
- #15 Tatiana Lopez-Guevara, Rita Pucci, Nicholas Taylor, Michael Gutmann, Subramanian Ramamoorthy and Kartic Subr. To Stir or Not to Stir: Online Estimation of Liquid Properties for Pouring Actions
- #16 Abhinav Valada, Noha Radwan and Wolfram Burgard. Incorporating Semantic and Geometric Priors in Deep Pose Regression
- #17 Sanket Gaurav and Brian Ziebart. Training Inverse Reinforcement Learning Models for Goal Prediction
- #18 Matthew Sheckells, Gowtham Garimella and Marin Kobilarov. Robust Policy Search with Applications to Safe Vehicle Navigation
- #19 Anirudh Vemula, Wen Sun and J. Andrew Bagnell. Exploration in Action Space
- #20 Wen Sun, Geoffrey Gordon, Byron Boots and J. Andrew Bagnell. Dual Policy Iteration
- #21 Oleh Rybkin, Karl Pertsch, Andrew Jaegle, Konstantinos G. Derpanis and Kostas Daniilidis. Unsupervised Learning of Sensorimotor Affordances by Stochastic Future Prediction