Autonomous robots need the ability to perceive and model their environment and to make appropriate decisions in complex situations on their own. The complexity results from the high-dimensional perceptions, the large number of possible actions, and the uncertainty about the state of the world. Localization is an integral part of autonomous navigation systems, and in this talk we will present recent and ongoing developments at the University of Bonn on visual localization across substantial appearance changes. The approaches incorporate image sequence matching to deal with substantial appearance changes, hashing for efficient relocalization, and heuristic search for online operation.
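The core idea behind image sequence matching can be illustrated with a toy sketch (not the Bonn group's actual method): instead of matching a single image, a whole sequence of image descriptors is aligned against a reference traverse, which makes matching far more robust to per-image appearance change. The function name and constant-velocity alignment assumption below are illustrative.

```python
import numpy as np

def sequence_match(query_desc, db_desc):
    """Toy sketch of image-sequence matching for place recognition.

    query_desc: (m, d) array of image descriptors from the live sequence.
    db_desc:    (n, d) array of descriptors from the reference traverse.
    Returns the database start index that best aligns with the whole
    query sequence, assuming both were recorded at similar speed.
    """
    # Pairwise cosine distances between every query and database image.
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    r = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    cost = 1.0 - q @ r.T  # shape (m, n)

    m, n = cost.shape
    # Score each candidate start index by the summed cost along the
    # diagonal, i.e. a constant-velocity alignment of the two sequences.
    best_start, best_score = 0, np.inf
    for s in range(n - m + 1):
        score = cost[np.arange(m), s + np.arange(m)].sum()
        if score < best_score:
            best_start, best_score = s, score
    return best_start
```

Even if a few individual frames match poorly (e.g., due to glare or occlusion), the summed cost over the sequence still identifies the correct place.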
Visual localization methods aim to estimate the position and orientation from which an image or video was taken with respect to a scene representation, e.g., a 3D model or a database of images. Thus, they are a fundamental component for robotics as they allow agents such as self-driving cars or drones to localize themselves in and autonomously navigate through their environments. A crucial step in each visual localization algorithm is data association, i.e., relating the appearance of the input image or video with the appearance information stored in the scene representation. Changes in the environment, e.g., illumination changes during the course of a day, strongly affect the appearance of a scene. As a result, data association becomes harder and visual localization methods fail if these changes are too strong. In this talk, we discuss the impact of environment changes on visual localization. More specifically, we show that changes between day and night cause significant problems for existing approaches. In addition, we discuss scene representations that are based on semantic scene information rather than scene appearance and are thus more invariant to environmental changes.
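A minimal sketch of the data-association step described above, using nearest-neighbour descriptor matching with a ratio test (a standard technique, not necessarily the speakers' pipeline; the function name and threshold are illustrative). Under strong appearance change such as day versus night, descriptors of the same scene point drift apart, the ratio test rejects most candidates, and pose estimation downstream fails for lack of correspondences.

```python
import numpy as np

def match_features(query, ref, ratio=0.8):
    """Associate query descriptors (m, d) with reference descriptors (n, d)
    via nearest-neighbour search plus a ratio test: a match is kept only
    if the best distance is clearly smaller than the second best."""
    matches = []
    for i, d in enumerate(query):
        dists = np.linalg.norm(ref - d, axis=1)
        j, k = np.argsort(dists)[:2]      # best and second-best candidates
        if dists[j] < ratio * dists[k]:   # accept only distinctive matches
            matches.append((i, j))
    return matches
```

Semantic scene representations sidestep this failure mode: a "building" or "tree" label is largely stable from day to night even when the pixel-level appearance is not.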
10:00 Carl Wellington, UBER Advanced Technologies - Long-term deployment of self-driving cars and trucks
To be announced.
John J. Leonard is Samuel C. Collins Professor of Mechanical and Ocean Engineering in the MIT Department of Mechanical Engineering. He is also a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research addresses the problems of navigation and mapping for autonomous mobile robots. He holds the degrees of B.S.E.E. in Electrical Engineering and Science from the University of Pennsylvania (1987) and D.Phil. in Engineering Science from the University of Oxford (1994). He is an IEEE Fellow (2014). Professor Leonard is currently on sabbatical leave from MIT working on research for active safety and autonomous driving for Toyota Research Institute.
Knowing where you are, forever, irrespective of weather and lighting, remains an intriguing challenge for vision-based systems, and without doubt such a system would be invaluable for long-term autonomy. In this talk I will describe an ongoing thread of work to produce metric visual localisers that are (a) data efficient and (b) robust to really quite extraordinary scene change. Our goal is to have one photo pair of every place and always, whatever the weather, whatever the time of day, be able to localise relative to it metrically. We cannot quite get by with one image for every place, but with our recent work leveraging appearance transfer we are getting very close.
We seek the ultimate goal of having self-sufficient autonomous service mobile robots working in human environments, performing tasks accurately and robustly. Successfully deploying such robots requires addressing several challenges in mapping, localization, navigation, and autonomous exception recovery. The key to robust execution in all these sub-problems is to expect and anticipate changes in the environment, the deployment conditions, and algorithmic limitations. In this talk, I shall present our recent research along two broad themes: algorithms for robust navigation of long-term autonomous mobile robots, and algorithms to ensure that they remain autonomous over extended periods of time. In particular, I shall present several algorithms for long-term mapping, localization, joint perception and planning, and autonomous sensor calibration. These algorithms have enabled our robots to autonomously traverse more than a thousand kilometers while performing tasks in multiple universities.
Collaborative robots, or cobots, are a class of machines intended to physically interact with humans in a shared workspace. This is a broad definition, which encompasses a wide variety of hardware systems and application domains. The cobot industry is expected to expand dramatically in the next five years and beyond, as robots become partners in our everyday lives. However, before this transition can take place, significant challenges remain to be solved. In particular, most cobots will need to operate alongside non-experts in situations where regular servicing or adjustment (e.g., every day) is impractical or impossible. How can we build machines that are capable of long-term autonomy (on the order of days, weeks, or months), that is, without a human in the loop at every step?
In this talk, I will discuss our experiences with two collaborative robots, within the context of long-term autonomy. First, I will review our work on developing online, anytime self-calibration algorithms for a mobile manipulator operating in dynamic environments. This type of introspective self-calibration is necessary to maintain optimal task performance despite changes to the platform caused by wear and tear, for example. I will then discuss our research in the area of assistive autonomy, specifically the design of a self-driving wheelchair for mobility-impaired users. The chair has been engineered to operate for long periods of time without the intervention of a technician (a necessary feature to make it marketable), while still being low cost. I will examine some of the difficult design choices involved in creating such a device. Finally, I will review several ideas for ways in which the community can push towards building even more robust systems capable of independent operation.
The STRANDS Project is an ambitious EU project with the aim of producing an intelligent mobile robot capable of long-term, uninterrupted autonomy in everyday, uncontrolled environments. The project, which concluded during 2017, achieved a total of one year of autonomy in care and security environments. This talk will give an overview of the developments in the STRANDS Project that have contributed towards this milestone, and of the key technology that allowed the STRANDS robots to represent the structure of the environment and how it changes over time.