Mobile Robot

Mobile robot autonomy relies heavily on localization and mapping. While SLAM (Simultaneous Localization and Mapping) is commonly modeled as inference over a factor graph (or a Bayesian network or Markov random field, depending on the formulation), establishing the graph correctly in real-world environments remains a challenging problem. To address this, reliable and robust feature detectors and data association algorithms are necessary.

General-Purpose Feature Detection

The detection of features from Light Detection and Ranging (LIDAR) data is a fundamental component of feature-based mapping and SLAM systems. Existing detectors tend to exploit characteristics of specific environments: corners and lines in indoor (rectilinear) environments, and trees in outdoor environments. While these detectors work well in their intended environments, their performance elsewhere can be very poor.

We describe a general-purpose feature detector for LIDAR data that is applicable to virtually any environment. Fig. 1 illustrates the proposed method. Top: the input image with overlaid local maxima (prior to additional filtering); circles indicate features, with radius equal to the scale of the feature. Left: image pyramid of the input. Right: corner response pyramid, in which local maxima indicate features.

Our method adapts classic feature detection techniques from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method identifies stable features at a variety of spatial scales and produces uncertainty estimates for use in a state estimation algorithm. We present results on standard datasets, including Victoria Park and Intel Research Center (both 2D), and the MIT DARPA Urban Challenge dataset (3D) (Fig. 2).
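The Kanade-Tomasi response is the minimum eigenvalue of the 2x2 structure tensor of image gradients summed over a local window. The sketch below is a minimal, pure-NumPy illustration of that response on one pyramid level; the function names (`box_sum`, `corner_response`) are illustrative, not the authors' code, and the pyramid, filtering, and uncertainty stages are omitted.

```python
import numpy as np


def box_sum(a, win):
    """Sum each (2*win+1)^2 window via a summed-area table (zero-padded edges)."""
    k = 2 * win + 1
    p = np.pad(a, win)
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = a.shape
    return c[k:k + h, k:k + w] - c[:h, k:k + w] - c[k:k + h, :w] + c[:h, :w]


def corner_response(img, win=2):
    """Kanade-Tomasi response: min eigenvalue of the windowed gradient tensor."""
    iy, ix = np.gradient(img.astype(float))
    sxx = box_sum(ix * ix, win)
    syy = box_sum(iy * iy, win)
    sxy = box_sum(ix * iy, win)
    # Closed-form minimum eigenvalue of [[sxx, sxy], [sxy, syy]].
    tr = sxx + syy
    det = sxx * syy - sxy * sxy
    return tr / 2 - np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))


# A synthetic corner responds strongly; flat regions and straight edges do not.
img = np.zeros((20, 20))
img[8:, 8:] = 1.0
r = corner_response(img)
```

Straight edges score near zero because their gradients share one direction, leaving the tensor rank-deficient; only two-directional structure (a corner) lifts the minimum eigenvalue.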

Fig. 1. Multi-scale feature extraction from LIDAR data. Our method rasterizes LIDAR data and applies the Kanade-Tomasi corner detector to identify stable and repeatable features.

Fig. 2. 3D Scan Rasterization. Left: a Velodyne scan with points colored according to Z height. Right: rasterized image with superimposed extracted features and corresponding uncertainties. 3D LIDAR data was rasterized by considering the range of Z values in each cell of a polar grid.
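The caption above describes rasterizing 3D scans by the spread of Z values per polar-grid cell. The sketch below illustrates that idea under stated assumptions (grid sizes, maximum range, and the function name `rasterize_polar` are all illustrative choices, not values from the paper):

```python
import numpy as np


def rasterize_polar(points, n_theta=360, n_range=100, max_range=50.0):
    """Rasterize an (N, 3) point cloud into a polar grid whose cells store
    the spread of Z values (max - min), as in the described 3D rasterization."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)                                   # [-pi, pi)
    ti = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    ri = (r / max_range * n_range).astype(int)
    keep = ri < n_range                                        # drop far points
    ti, ri, z = ti[keep], ri[keep], z[keep]
    zmax = np.full((n_theta, n_range), -np.inf)
    zmin = np.full((n_theta, n_range), np.inf)
    np.maximum.at(zmax, (ti, ri), z)                           # per-cell max Z
    np.minimum.at(zmin, (ti, ri), z)                           # per-cell min Z
    return np.where(np.isfinite(zmax), zmax - zmin, 0.0)


# A vertical pole produces a large Z spread; flat ground produces none.
pole = np.array([[10.0, 0.0, z] for z in np.linspace(0.0, 2.0, 5)])
ground = np.array([[5.0, 5.0, 0.0], [5.0, 5.01, 0.0]])
img = rasterize_polar(np.vstack([pole, ground]))
```

Vertical structure (poles, walls, tree trunks) thus shows up as bright cells, while ground returns stay dark, which is what makes the rasterized image amenable to a corner detector.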

A detailed description of the proposed method can be found in Extracting general-purpose features from LIDAR data, and the evaluation of feature uncertainties in A General Purpose Feature Extractor for Light Detection and Ranging Data.

A video showing how the proposed method works is available here.

LIDAR feature extraction faces the following challenges (Fig. 3), which reduce feature detection precision and reliability:

  • Sensor noise. LIDAR data is inevitably contaminated by noise. This noise, if not managed, can create false positives.
  • Discretization error. Sampling the world at a fixed angular resolution causes distant parts of the world to be sampled very coarsely. This range-dependent resolution can make it difficult to recognize the same environment from different ranges.
  • Missing data. Occlusions and reflective objects result in gaps in sensor data. A feature detector must be robust to these gaps.
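The discretization error above follows directly from the scan geometry: adjacent beams separated by a fixed angle Δθ land roughly r·Δθ apart at range r. A quick numeric check (0.25° is a typical 2D scanner resolution, an assumption rather than a figure from the text):

```python
import math

# Adjacent-beam spacing grows linearly with range: s = r * dtheta.
dtheta = math.radians(0.25)  # assumed angular resolution, not from the text
for r in (1.0, 10.0, 50.0):
    print(f"range {r:5.1f} m -> beam spacing {r * dtheta * 100:.1f} cm")
```

So an object sampled every few millimeters up close is sampled every ~20 cm at 50 m, which is why a single-scale detector struggles to re-detect the same feature from different ranges.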

To further improve computational efficiency, detection precision, and repeatability, we propose a structure-tensor-based feature detector consisting of the following three steps:

  • Pretreatment. We group raw observations into contours and smooth points in each contour with a probabilistically rigorous method. We then compute the surface normals along the contours.
  • Candidate detection. We slide a circular window around the contours, computing the structure tensor of the surface normals within the window. The minimum eigenvalue of the structure tensor measures the feature strength; strong features become feature candidates.
  • Candidate suppression. We reject feature candidates in the vicinity of stronger candidates in order to reduce feature-matching confusion.
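The three steps above can be sketched as follows for a single 2D contour. This is a simplified illustration, not the authors' implementation: the probabilistic smoothing of the pretreatment step is omitted, the window is taken over contour indices rather than a metric radius, and all function names are invented for the sketch.

```python
import numpy as np


def contour_normals(pts):
    """Unit normals of an ordered 2-D contour (N, 2): rotate tangents 90 deg."""
    t = np.gradient(pts, axis=0)              # central-difference tangents
    n = np.stack([-t[:, 1], t[:, 0]], axis=1)
    return n / np.linalg.norm(n, axis=1, keepdims=True)


def feature_strengths(pts, win=3):
    """Min eigenvalue of the structure tensor of normals in a sliding window."""
    n = contour_normals(pts)
    s = np.zeros(len(pts))
    for i in range(win, len(pts) - win):
        w = n[i - win:i + win + 1]
        tensor = w.T @ w                      # 2x2 structure tensor of normals
        s[i] = np.linalg.eigvalsh(tensor)[0]  # ascending -> [0] is the minimum
    return s


def suppress(strengths, radius=5):
    """Keep only candidates that are strongest within `radius` contour indices."""
    keep, taken = [], np.zeros(len(strengths), bool)
    for i in np.argsort(-strengths):          # strongest first
        if strengths[i] <= 0 or taken[i]:
            continue
        keep.append(i)
        taken[max(0, i - radius):i + radius + 1] = True
    return sorted(keep)


# An L-shaped contour: normals are constant along each leg (strength 0)
# and turn at the corner, where the structure tensor becomes full rank.
pts = np.array([[i, 0.0] for i in range(10)] + [[9.0, j] for j in range(1, 10)])
s = feature_strengths(pts)
```

On a straight segment all normals agree, the tensor is rank one, and the minimum eigenvalue is zero; only where the surface orientation changes does the strength rise, which is exactly the behavior the candidate-detection step exploits.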

Fig. 3. Challenges in feature extraction from LIDAR data. Gray shapes denote obstacles and blue points denote observations. The three problems are indicated with arrows.

Fig. 4. Overview of the proposed method. Blue points are raw observations and red points denote their smoothed counterparts; yellow lines denote observation angles and short colored lines denote grid cells; small gray ellipses denote observation uncertainties; red circles indicate extracted features, while blue and gray circles denote feature candidates suppressed by strength comparison and distance comparison, respectively. Numbers denote feature strengths and corresponding scales.

Fig. 5. Multi-scale normal structure tensor feature extraction. Red circles indicate extracted features; the size of a circle indicates the scale of the feature. Blue triangles denote robots; yellow lines denote current observations and yellow points are accumulated observation points.

A detailed description of the proposed method can be found in Structure tensors for general purpose LIDAR feature extraction.

A video showing how the proposed method works is available here.

Posterior-Based Data Association

One of the fundamental challenges in robotics is data association: determining which sensor observations correspond to the same physical object. A common approach is to consider groups of observations simultaneously: a constellation of observations can be significantly less ambiguous than the same observations considered individually. The Joint Compatibility Branch and Bound (JCBB) test is the gold-standard method for these data association problems, but its computational complexity and its sensitivity to non-linearities limit its practical usefulness. We propose the Incremental Posterior Joint Compatibility (IPJC) test. While equivalent to JCBB on linear problems, it is significantly more accurate on non-linear problems. When used for feature-cloud matching (an important special case), IPJC is also dramatically faster than JCBB. We demonstrate the advantages of IPJC over JCBB and other commonly used methods on both synthetic and real-world datasets.
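To make the joint-compatibility idea concrete, the sketch below searches the pairing tree for the largest set of observation-landmark matches whose joint Mahalanobis distance passes a chi-square gate. It is a deliberately simplified toy, not IPJC or a full JCBB: innovations are assumed independent with a shared covariance R, so the joint distance reduces to a running sum, whereas in a real SLAM filter the joint innovation covariance couples all pairings through the state covariance.

```python
import numpy as np

# 95% chi-square gates; each 2-D pairing contributes 2 degrees of freedom.
CHI2_95 = {2: 5.99, 4: 9.49, 6: 12.59, 8: 15.51, 10: 18.31}


def jcbb(obs, lms, R):
    """Toy branch and bound: largest jointly compatible set of pairings
    obs[i] -> lms[j]. obs (N, 2) and lms (M, 2) are positions; R is the
    (assumed shared, independent) 2x2 innovation covariance."""
    Rinv = np.linalg.inv(R)
    best = []

    def recurse(i, pairs, used, d2):
        nonlocal best
        if len(pairs) + (len(obs) - i) <= len(best):
            return                         # bound: cannot beat the incumbent
        if i == len(obs):
            if len(pairs) > len(best):
                best = pairs[:]
            return
        for j in range(len(lms)):          # branch: pair obs[i] with lms[j]
            if j in used:
                continue
            nu = obs[i] - lms[j]
            d2j = d2 + float(nu @ Rinv @ nu)
            if d2j <= CHI2_95[2 * (len(pairs) + 1)]:
                recurse(i + 1, pairs + [(i, j)], used | {j}, d2j)
        recurse(i + 1, pairs, used, d2)    # branch: leave obs[i] unmatched

    recurse(0, [], set(), 0.0)
    return best


# Three good observations plus one outlier that no landmark explains.
lms = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
obs = np.array([[0.1, 0.0], [5.1, 0.0], [0.1, 5.0], [20.0, 20.0]])
matches = jcbb(obs, lms, np.eye(2) * 0.01)
```

The key property the test captures is that the gate is joint: three pairings each with distance 1 must together pass the 6-dof threshold (12.59), which is stricter than passing three separate 2-dof gates, so constellations of marginally plausible matches get rejected together.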