Contributed Papers


Donghwa Lee, Jongdae Jung and Hyun Myung, Pose Graph-Based RGB-D SLAM in Low Dynamic Environments

Abstract - The simultaneous localization and mapping (SLAM) problem has usually been handled in static environments. This assumption is valid for comparing the performance of a variety of SLAM algorithms, but the real world is not static. In recent years, many SLAM solutions have been proposed for dynamic environments, but most of them rely on expensive sensors such as a laser range finder (LRF). In highly dynamic environments, vision sensors can easily detect moving objects. However, if the poses of objects change over large time intervals, it is difficult to recognize these movements using vision sensors alone. This problem has been defined in previous work and referred to as a low dynamic environment. In the present study, we propose a novel SLAM method for low dynamic environments using a pose graph and an RGB-D (red-green-blue depth) vision sensor.
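The pose-graph formulation underlying this kind of SLAM system can be illustrated with a toy example. The sketch below is our own simplification (1-D poses, hand-picked measurements, none of it from the paper): robot positions are graph nodes, odometry and a loop closure are weighted relative-measurement edges, and the optimum is found by solving the resulting linear least-squares system.

```python
import numpy as np

# Minimal 1-D pose-graph sketch (illustrative only, not the authors' system).
# Nodes are robot positions x0..x3; edges are relative measurements with
# information (inverse-variance) weights. x0 is fixed as the anchor.
edges = [               # (i, j, measured x_j - x_i, weight)
    (0, 1, 1.1, 1.0),   # odometry
    (1, 2, 1.0, 1.0),   # odometry
    (2, 3, 1.05, 1.0),  # odometry
    (0, 3, 3.0, 2.0),   # loop closure: a more trusted measurement
]

n = 4
H = np.zeros((n, n))    # information matrix
b = np.zeros(n)
for i, j, z, w in edges:
    # cost term w * ((x_j - x_i) - z)^2; linear, so one solve is exact
    H[i, i] += w; H[j, j] += w
    H[i, j] -= w; H[j, i] -= w
    b[i] -= w * z
    b[j] += w * z

# Anchor the first pose by clamping its row/column
H[0, :] = 0; H[:, 0] = 0; H[0, 0] = 1.0; b[0] = 0.0
x = np.linalg.solve(H, b)
print(x)  # x3 is pulled from the odometry sum (3.15) toward the loop closure (3.0)
```

In a real RGB-D system the nodes are SE(3) poses and the solve is iterated (Gauss-Newton), but the structure of the information matrix is the same.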

Extended Abstract


Yasir Latif, Guoquan Huang, John J. Leonard and Jose Neira, Applying Sparse L1-optimization to problems in robotics

Abstract - Sparse L1-optimization techniques have received a lot of attention in the signal processing and computer vision communities, where they have been applied to problems such as denoising, deblurring, and face recognition. Using an L1 objective in an optimization problem has been shown to induce sparsity. Moreover, the problem is convex, guaranteeing a global minimum. Well-studied techniques and solvers exist that yield efficient solutions, either by posing the problem as a linear program (LP) or by exploiting its sparse nature, e.g., homotopy-based methods. In this work, we provide an overview of this sparse L1 formulation and apply it to various problems in robotics, including loop closure detection, place categorization, and topological SLAM.
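The sparsity-inducing effect of the L1 objective can be demonstrated with a small numerical sketch. This is our own illustration, not the authors' solver: it recovers a sparse vector from an underdetermined system by minimizing 0.5*||Ax - y||^2 + lam*||x||_1 with ISTA (iterative soft-thresholding), one of the simplest proximal-gradient methods for this problem.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))        # underdetermined: 30 equations, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 1.5]    # sparse ground truth
y = A @ x_true

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(100)
for _ in range(5000):
    g = A.T @ (A @ x - y)                               # gradient of the smooth term
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold (L1 prox)

print(np.flatnonzero(np.abs(x) > 0.1))    # indices of the recovered nonzeros
```

Most entries of the solution are exactly zero because the soft-threshold step zeroes small coefficients, which is precisely the sparsity the abstract refers to; homotopy and LP solvers reach the same global minimum more efficiently.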

Syed Atif Mehdi and Karsten Berns, Life-Long Learning of Daily Human Routine in Home Environments

Abstract - Advances in technology have transformed large industrial robots into small companion robots for humans in their own homes. These robots now help people perform various tasks. The robots can become more useful and better companions if they can learn the daily routine of the inhabitants and adapt themselves according to the learned routine. In this paper, a methodology is presented for learning the pattern of human presence in different rooms of a home environment at different times of the day. Promising experimental results show that, using the developed methodology, a mobile robot autonomously learns the hourly location-based routine of a person.
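The core idea of an hourly location-based routine can be sketched very simply. The toy example below is our own reduction of the idea (the paper's actual learning method is not this): log (hour, room) observations over many days, then predict the most frequent room per hour.

```python
from collections import Counter, defaultdict

# Toy sketch (our own simplification, not the paper's methodology):
# count where the person was observed at each hour, then predict the
# most frequent room for that hour.
observations = [          # (hour, room) pairs logged by the robot
    (8, "kitchen"), (8, "kitchen"), (8, "bedroom"),
    (13, "living_room"), (13, "living_room"), (13, "kitchen"),
    (21, "bedroom"), (21, "bedroom"), (21, "living_room"),
]

routine = defaultdict(Counter)
for hour, room in observations:
    routine[hour][room] += 1

def predict(hour):
    """Most likely room for a given hour (None if never observed)."""
    return routine[hour].most_common(1)[0][0] if routine[hour] else None

print(predict(8), predict(13), predict(21))  # kitchen living_room bedroom
```

Life-long learning then amounts to updating these counts continually, so the predicted routine adapts as the inhabitant's habits change.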

Extended Abstract


Dimitrios G. Kottas, Ryan C. DuToit, Ahmed Ahmed, Chao X. Guo, Georgios Georgiou, Ruipeng Li, and Stergios I. Roumeliotis, A Resource-aware Vision-aided Inertial Navigation System for Wearable and Portable Computers

Abstract - In this paper, we address the problem of deploying Vision-aided Inertial Navigation Systems (VINS) on resource-constrained platforms such as cell phones and wearable computers. In particular, we consider the case of a sliding-window extended Kalman filter (EKF)-based estimator and focus on optimizing its use of the available processing resources. This is achieved by first classifying visual observations based on their feature-track length and then assigning different portions of the CPU budget for processing subsets of the observations belonging to each class. Moreover, we introduce a processing strategy where “spare” CPU cycles are used for (re)-processing all or a subset of the observations corresponding to the same feature, across multiple, overlapping, sliding windows. This way, feature observations are used by the estimator more than once for improving the state estimates, while consistency is ensured by marginalizing each feature only once (i.e., when it moves outside the camera’s field of view). The ability of the proposed feature classification and processing approach to adjust to the availability of processing resources is demonstrated experimentally on a Samsung S4 cell phone and on the Google Glass, where VINS operates in real-time while occupying only half of the CPU cycles of one of the ARM processor’s cores.
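The feature-track classification step can be sketched as follows. This is our own illustrative version (thresholds, names, and the bucketing policy are assumptions, not the authors' implementation): tracks are bucketed by length relative to the sliding window of poses, so that each class can be given its own share of the CPU budget.

```python
# Illustrative sketch (our own, not the authors' implementation):
# classify feature tracks by length relative to a sliding window of
# M poses, so each class can be assigned a CPU-budget share.
M = 10                     # sliding-window size in poses (assumed value)

def classify(track_len, window=M):
    if track_len < 3:
        return "discard"   # too short to usefully constrain motion
    elif track_len < window:
        return "short"     # track ended inside the window: process and marginalize
    else:
        return "long"      # still tracked across the window: keep contributing

tracks = {"f1": 2, "f2": 6, "f3": 14}   # feature id -> track length (frames)
buckets = {fid: classify(n) for fid, n in tracks.items()}
print(buckets)  # {'f1': 'discard', 'f2': 'short', 'f3': 'long'}
```

The re-processing strategy in the abstract then spends spare cycles on observations of the same feature across overlapping windows, while the marginalize-once rule preserves estimator consistency.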

Extended Abstract - TBC


Zachary Taylor and Juan Nieto, Parameterless Automatic Extrinsic Calibration of Vehicle Mounted Lidar-Camera Systems

Abstract - This paper presents a new method for automated extrinsic calibration of multi-modal sensors. In particular, the paper presents and evaluates a pipeline for calibration of 3D lidar and cameras mounted on a sensor vehicle. Previous methods for multi-modal sensor calibration find the optimal parameters by aligning a set of observations from the different sensor modalities. The main drawback of these methods is the need for a good initialisation in order to avoid converging to a local minimum. Our approach eliminates this limitation by combining external observations with motion estimates obtained with the individual sensors. The method operates by utilizing structure-from-motion based hand-eye calibration to constrain the search space of the optimisation.
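The hand-eye constraint that links the two sensors' motion estimates is the classical A·X = X·B relation, where A and B are the motions estimated by each sensor and X is the unknown extrinsic. The sketch below is a toy 2-D numerical check of that identity (our own illustration, not the paper's pipeline): given the true extrinsic, the residual of the constraint vanishes, which is what lets per-sensor motion constrain the calibration search space.

```python
import numpy as np

# Toy 2-D check of the hand-eye constraint A @ X = X @ B
# (illustrative only; real calibration works in SE(3) with noisy motions).
def se2(theta, t):
    """Homogeneous 3x3 transform from a 2-D rotation angle and translation."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(3)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 2] = t
    return T

X = se2(0.3, [0.5, -0.2])        # extrinsic between the sensors (ground truth here)
B = se2(0.8, [1.0, 0.4])         # motion estimated by sensor 2 (e.g., camera SfM)
A = X @ B @ np.linalg.inv(X)     # corresponding motion seen by sensor 1 (e.g., lidar)

residual = np.abs(A @ X - X @ B).max()
print(residual)                  # ~0: the constraint holds at the true extrinsic
```

With noisy real motions the residual is minimized rather than zeroed, and that minimization is what constrains the subsequent appearance-based alignment.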

Extended Abstract
