Invited Speakers

The following invited speakers are confirmed to present at the workshop: 

Paul Newman, University of Oxford, UK

Learning Your Way - 1000 km of Vision-Only Localisation with Place-Dependent Features

We have been working for a while now on trading everywhere-mediocrity (point features) for place-specific excellence (use-once, bespoke features). It's an interesting application of massive overfitting that only makes sense when the intent is to build a machine that nails localisation in a specific, albeit large, workspace. In this talk I will critique our progress and discuss the impact of this feature choice on other autonomy system components and on the navigation pipeline architecture. We will analyse system performance over 1,000 km of road data across the spectrum of lighting and weather conditions. We hope to have some surprising results...

Wolfram Burgard, University of Freiburg, Germany

Techniques for Lifelong Navigation of Mobile Robots

In this talk we will discuss different approaches for mobile robots acting autonomously over long periods of time in changing environments. A particularly relevant topic is the robustness of vision-based localization methods to changes in visual appearance. We will present different approaches to robust vision-based localization and vision-based SLAM in environments across different seasons. In addition, we will present methods for laser-based navigation in dynamic environments, which require dedicated representations to properly utilize and reason about the potentially dynamic aspects of the environment.

Joydeep Biswas, UMass Amherst, US

Autonomous Mobile Robot Perception for Changing Environments

We seek the ultimate goal of having autonomous service mobile robots permanently deployed in real human environments, performing tasks robustly and accurately on demand. Robust and accurate perception is an integral part of this problem. Human environments have frequent changes, unexpected occlusions, and often dense crowds of dynamic obstacles. This talk will cover novel models for representing such challenging environments while accounting for their non-static nature; how to perform inference in real time using such models, with a Varying Graphical Network to find and track correlations among the observations and the models; and how to build and update such models over time, incorporating all past observations of all robots deployed in an environment. We present results showing how such approaches have allowed the CoBots at CMU to autonomously traverse more than 1,000 km to date since 2010, and to continue to autonomously serve the occupants at UMass as well as at CMU.

Kanna Rajan, Norwegian University of Science and Technology, Norway / University of Porto, Portugal

Coordinated Observations for Persistence in the Deep Sea: Experiments and Lessons Learned

We explore the use of AI-based methods for command and control of low-cost autonomous underwater vehicles (AUVs) and unmanned aerial vehicles (UAVs), using a mix of fully autonomous and mixed-initiative approaches. Our work is set in the context of highly interdisciplinary science and engineering experimentation driven by scientific goals, often working with biologists in the open sea. We explore the boundaries of decision-theoretic and control-theoretic approaches and suggest how a blend of the two has proven to have a substantial impact on the scientific discipline.

Reid Simmons, Carnegie Mellon University, US 

Robust Autonomy

Autonomous robots are usually good at achieving their goals. Except when they aren't. Unfortunately, most current robot systems are not very good at detecting the difference between the two situations. This talk concerns the issue of having autonomous robots reliably detect unanticipated situations and recover from them. To deal with this, we are investigating probabilistic techniques that monitor the robot's task performance to detect differences between modeled and observed behavior, and then use a hierarchy of models to replan in the most appropriate model for that situation. Statistically significant differences are used to update the models, so that they more accurately reflect reality. Active learning is used to try to refine models more proactively. We show how these techniques apply in various domains, including indoor navigation, forklifts, space systems, and robot soccer, leading to more robust (and safer) systems.

Nick Hawes, University of Birmingham, UK 

Long-term Autonomy in Everyday Environments: An Overview of the STRANDS Project

The STRANDS Project is an ambitious EU project with the aim of producing an intelligent mobile robot capable of 120 days of uninterrupted autonomy in everyday, uncontrolled environments. The project is now at its halfway point, and has recently achieved a milestone of 30 days of autonomy in care and security environments. This talk will present the developments in the STRANDS Project that have contributed towards this milestone, and the challenges in long-term autonomy that have yet to be overcome.

Tim Barfoot, University of Toronto, CA

Experience is the Best Teacher: Using Past Performance to Continually Improve Localization, Terrain Assessment, and Path Tracking for Visual Route Following

For the last several years, we have been working on a vision-only route-following technique for mobile robots, called Visual Teach & Repeat (VT&R). This approach is successful in practice because it (i) exploits human experience in route definition, (ii) avoids the need to build a global map of the world, and (iii) plays to the strengths of computer vision by keeping the viewpoints the same between teach and repeat. However, to scale up to real-world operations, we need to be able to repeat routes as quickly as possible, in the presence of dynamic obstacles, and, of course, to deal with visual change (lighting, weather, etc.). In this talk, I will describe our latest approach, VT&R 2.0, which leverages past route-driving experience not only to improve visual localization, but also to improve our ability to detect obstacles visually in difficult scenarios (e.g., tall grass), and even to track paths more accurately and more quickly over time. I will present field-test results using our 1000 kg Grizzly mobile robot in a variety of outdoor, off-road scenarios.