Spotlights and posters

11:30-12:00 Lightning talks

12:00-14:00 Lunch and poster session

Amanda Mahony, Hadrien Bride, Jin Song Dong, Zhé Hóu, Brendan Mahony and Martin Oxenham: A Trusted Goal Reasoning and Planning Framework for Long Term Autonomy

While AI techniques have found many successful applications in autonomous systems, many of them permit behaviours that are difficult to interpret and may lead to uncertain results. We follow the “verification as planning” paradigm and propose to use model checking techniques to solve planning and goal reasoning problems for autonomous systems. We present our preliminary result as a framework called Goal Reasoning And Verification for Independent Trusted Autonomous Systems (GRAVITAS) and discuss how it helps provide formally verified plans in a dynamic environment.
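The "verification as planning" idea above can be illustrated with a minimal sketch: exhaustively explore a finite transition system (as a model checker would) and return only a plan whose every transition has been checked to reach the goal. The state space, action names, and toy mission below are hypothetical illustrations, not part of GRAVITAS itself.

```python
from collections import deque

def find_verified_plan(initial, goal, transitions):
    """Breadth-first search over the full state space; the returned plan
    provably reaches `goal` because every transition is explored and checked."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, next_state in transitions.get(state, []):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None  # goal unreachable: no valid plan exists

# Toy autonomous-vehicle mission model (assumed for illustration).
transitions = {
    "dock": [("undock", "open_water")],
    "open_water": [("survey", "surveyed"), ("dock", "dock")],
    "surveyed": [("return", "dock")],
}
plan = find_verified_plan("dock", "surveyed", transitions)
print(plan)  # ['undock', 'survey']
```

In a real system the exhaustive search is what distinguishes this from heuristic planners: the plan comes with a correctness guarantee over the model, at the cost of exploring the whole state space.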


Since large-scale point clouds are widely utilized in autonomous vehicles, it is important to handle huge amounts of point cloud data efficiently. Moreover, point cloud data for vehicle localization must be processed in real time. This paper therefore proposes a simple and efficient point cloud map management scheme for vehicle localization that uses an image format. It also introduces a method to load the map in real time according to the vehicle's location. The proposed method was verified using large-scale point cloud data provided by the Singapore Land Authority (SLA), and the performance of the map was confirmed by localizing the vehicle using the generated map.
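One plausible reading of the image-format scheme is sketched below (this is an assumption, not the authors' actual pipeline): each map tile of the point cloud is encoded as a 2D image whose pixel value is the maximum point height in that grid cell, and only the tiles surrounding the vehicle are loaded at runtime. Tile size, grid resolution, and the height encoding are all illustrative choices.

```python
import numpy as np

TILE = 100.0   # tile edge length in metres (assumed)
RES = 0.5      # grid cell size in metres (assumed)

def tile_to_image(points, tile_origin):
    """points: (N, 3) array of x, y, z; returns a height image for one tile."""
    n = int(TILE / RES)
    img = np.zeros((n, n), dtype=np.float32)
    ix = ((points[:, 0] - tile_origin[0]) / RES).astype(int)
    iy = ((points[:, 1] - tile_origin[1]) / RES).astype(int)
    keep = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
    # keep the highest point per cell
    np.maximum.at(img, (iy[keep], ix[keep]), points[keep, 2])
    return img

def tiles_near(vehicle_xy):
    """Indices of the 3x3 block of tiles around the vehicle, for real-time loading."""
    tx, ty = int(vehicle_xy[0] // TILE), int(vehicle_xy[1] // TILE)
    return [(tx + dx, ty + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

pts = np.array([[10.2, 20.7, 1.5], [10.4, 20.6, 2.0]])
img = tile_to_image(pts, (0.0, 0.0))
print(img[41, 20])                      # both points fall in this cell; max height 2.0
print(len(tiles_near((150.0, 250.0))))  # 9
```

Storing tiles as images also means standard image compression and fast memory-mapped loading can be applied off the shelf, which is likely part of the appeal of the format.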


Changing environmental conditions such as weather, time-of-day and season affect the appearance of a place, rendering appearance-based place recognition challenging in long-term applications. While appearance is subject to change under these conditions, the 3D structure of a place remains mostly similar. In this workshop presentation, we talk about methods that perform place recognition from 3D structure only and show how they are more robust to appearance changes than methods that are based on appearance only. More specifically, we focus on sparse and semi-dense, rather than fully dense structure. Not relying on fully dense structure enables data acquisition from vision only.


Giseop Kim, Byungjae Park and Ayoung Kim: Learning Scan Context toward Long-term LiDAR Localization

We present a robust Light Detection and Ranging (LiDAR)-based place descriptor called Scan Context Image (SCI) and a localization method that uses this representation for long-term simultaneous localization and mapping (SLAM). We formulate localization as a conventional supervised classification problem using a convolutional neural network (CNN), where a gridded place is considered a single class and the SCIs acquired in that place are the data for that class. The small network (three convnets in our work) is trained with only a single sequence from a single date, comprising a total of 579 places and an average of 26 data points per class. Despite these constraints, we show that the learned network achieves sufficient performance (top-5 average accuracy over 90%) on 13 other test sequences from different dates spanning a year with varying environmental conditions.
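A scan-context-style descriptor can be sketched as follows (a simplified analogue, not the authors' exact SCI formulation): LiDAR points are binned into a ring-by-sector polar grid, and each cell stores the maximum point height, yielding a 2D image-like matrix that a CNN classifier could consume. The ring count, sector count, and maximum range below are assumed values.

```python
import numpy as np

NUM_RINGS, NUM_SECTORS, MAX_RANGE = 20, 60, 80.0  # assumed parameters

def scan_context(points):
    """points: (N, 3) x, y, z in the sensor frame -> (NUM_RINGS, NUM_SECTORS) descriptor."""
    sc = np.zeros((NUM_RINGS, NUM_SECTORS), dtype=np.float32)
    r = np.hypot(points[:, 0], points[:, 1])          # planar range of each point
    theta = np.arctan2(points[:, 1], points[:, 0])    # azimuth in [-pi, pi]
    ring = np.clip((r / MAX_RANGE * NUM_RINGS).astype(int), 0, NUM_RINGS - 1)
    sector = ((theta + np.pi) / (2 * np.pi) * NUM_SECTORS).astype(int) % NUM_SECTORS
    np.maximum.at(sc, (ring, sector), points[:, 2])   # keep max height per cell
    return sc

pts = np.array([[5.0, 0.0, 1.2], [5.0, 5.0, 0.8]])
sc = scan_context(pts)
print(sc.shape)  # (20, 60)
```

Because the descriptor is a fixed-size matrix per place, treating each gridded place as a class and its descriptors as training images maps naturally onto an ordinary CNN classification setup.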


Younggun Cho, Jinyong Jeong, Youngsik Shin and Ayoung Kim: DejavuGAN: Multi-temporal Image Translation toward Long-term Robot Autonomy

In this paper, we present a multi-temporal image translation network for long-term autonomy. Long-term navigation often suffers from substantial appearance changes over time, and this discrepancy in appearance causes topological and metric localization failures. To overcome these limitations, we propose a single, unified network that generates a Dejavu of the current scene. The proposed network synthesizes current scenes in target domains such as night, snow and fog. For efficient management of the database and the Dejavu of scenes, we use a discriminator to predict attribute differences between current images and database images. In the experimental results, we present synthesized images across various domains.


Marcin Dymczyk, Marius Fehr, Thomas Schneider and Roland Siegwart: Long-term Large-scale Mapping and Localization Using maplab

This paper discusses a large-scale and long-term mapping and localization scenario using the maplab open-source framework. We present a brief overview of the specific algorithms in the system that enable building a consistent map from multiple sessions. We then demonstrate that such a map can be reused even a few months later for efficient 6-DoF localization, and that new trajectories can be registered within the existing 3D model. The datasets presented in this paper are made publicly available.


Markus Bajones, Timothy Patten, Michael Zillich and Markus Vincze: Probabilistic Observation Maps for Use in Long-Term Human-Robot Interactions

To sustain long-term interaction between a human and a robot, the machine needs to show the ability to learn personal preferences. These can range from the time a person goes to bed, to the room in which he or she eats dinner, to where certain objects should be kept. With access to this kind of information, a robot will be able to fulfill a given task in less time and thus to the greater satisfaction of the user. In this paper, we introduce Observation maps, a framework to store observations of users, objects and environmental properties. It provides methods to query this information based on the relations between observations, their chronological occurrence and their type. Our framework stores the full history of all observations and provides high-level planners with methods to reason about a user's personal preferences for object locations and his or her schedule.
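The observation-map idea described above can be sketched as a small data store (class names, fields, and queries below are illustrative assumptions, not the authors' API): typed, timestamped observations are appended to a full history and queried by type or time window, and a simple frequency count stands in for preference reasoning over object locations.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Observation:
    obs_type: str      # e.g. "user", "object", "environment"
    subject: str       # e.g. "cup", "anna"
    location: str      # e.g. "kitchen"
    timestamp: float   # seconds since some epoch

@dataclass
class ObservationMap:
    history: List[Observation] = field(default_factory=list)

    def record(self, obs: Observation) -> None:
        self.history.append(obs)  # the full history is kept, never overwritten

    def query(self, obs_type=None, since=None, until=None) -> List[Observation]:
        """Filter the history by observation type and/or time window."""
        return [o for o in self.history
                if (obs_type is None or o.obs_type == obs_type)
                and (since is None or o.timestamp >= since)
                and (until is None or o.timestamp <= until)]

    def most_likely_location(self, subject: str) -> Optional[str]:
        """Naive preference estimate: the subject's most frequently observed location."""
        locs = [o.location for o in self.history if o.subject == subject]
        return max(set(locs), key=locs.count) if locs else None

omap = ObservationMap()
omap.record(Observation("object", "cup", "kitchen", 100.0))
omap.record(Observation("object", "cup", "kitchen", 200.0))
omap.record(Observation("object", "cup", "living_room", 300.0))
omap.record(Observation("user", "anna", "bedroom", 310.0))
print(omap.most_likely_location("cup"))                 # kitchen
print(len(omap.query(obs_type="object", since=150.0)))  # 2
```

A probabilistic version would replace the frequency count with a distribution over locations conditioned on time of day, which is the kind of reasoning the paper's high-level planners would draw on.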