Here is a brief summary of the research I conducted during my career, organized by topic.

Rao-Blackwellized Particle Filters for Grid-Based SLAM

A Rao-Blackwellized particle filter is a mathematical tool for recursive state estimation in which each particle represents a possible state of the system in factored form: one partition of the state is estimated by sampling, while the remaining partition is estimated analytically, conditioned on the sampled values. Thanks to the structure of the SLAM problem, it is possible to provide an effective solution to SLAM by sampling from the space of robot trajectories and computing the maps in analytic form, as shown in FastSLAM by Montemerlo et al.

My research aims to provide algorithms that estimate accurate grid maps with low computational requirements. This can be achieved by exploiting the structure of the SLAM problem in the design of the filter, via improved proposal distributions and adaptive resampling, as well as by re-using previous computations when estimating the next distribution of particles.
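As a toy illustration of these ideas, the sketch below (a simplification with hypothetical names and noise values, not the actual mapper) shows the particle structure with its analytically maintained map, a sampled motion step, and adaptive resampling driven by the effective sample size:

```python
import math
import random

class Particle:
    """One RBPF particle: a sampled trajectory pose plus its own map.

    In grid-based SLAM the map is an occupancy grid updated in closed
    form (log-odds); here a dict from cell to log-odds stands in for it.
    """
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)   # x, y, theta
        self.log_odds = {}            # the analytically estimated map
        self.weight = 1.0

def motion_sample(pose, u, noise=0.05):
    """Sample the next pose from a noisy motion model (the sampled part)."""
    x, y, th = pose
    v, w = u
    th += w + random.gauss(0.0, noise)
    return (x + v * math.cos(th), y + v * math.sin(th), th)

def neff(particles):
    """Effective sample size, the usual criterion for adaptive resampling."""
    s = sum(p.weight for p in particles)
    return 1.0 / sum((p.weight / s) ** 2 for p in particles)

def resample_if_needed(particles, threshold=0.5):
    """Resample only when neff drops below a fraction of the particle count."""
    n = len(particles)
    if neff(particles) >= threshold * n:
        return particles
    s = sum(p.weight for p in particles)
    picked = random.choices(particles, [p.weight / s for p in particles], k=n)
    for p in picked:              # a real implementation would copy maps here
        p.weight = 1.0 / n
    return picked
```

Resampling only when the effective sample size is low is what keeps good particles, and therefore good maps, alive; an improved proposal would additionally condition the motion sampling on the current observation.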

The same system has been adapted by Cyrill Stachniss et al. to build maps with a humanoid robot. Open source code is available at OpenSLAM. Slawomir Grzonka investigated the use of look-ahead proposals in SLAM, which make it possible to cope with limited sensor ranges.

Algorithms for Efficient Maximum Likelihood Mapping

A graph of poses is a natural formulation of the SLAM problem under known data associations. Each node of the graph represents a local map of the environment and a pose in the global frame. An edge between two nodes encodes a fictitious observation between two local maps, which arises from the overlap of common regions. With this formulation, solving SLAM may be regarded as finding the most likely configuration of the nodes under the given observations. This corresponds to finding the spatial arrangement of local maps that results in a maximally consistent representation. To this end, we extended a state-of-the-art mapping algorithm by Edwin Olson based on Stochastic Gradient Descent to operate with efficient tree parameterizations and on-line. A 3D version of the algorithm has been recently presented, and the related software is available on OpenSLAM, as usual.
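To give a flavor of the idea, here is a deliberately simplified 1D sketch of stochastic gradient descent on a pose graph, using an incremental parameterization; the real algorithm works on 2D/3D poses with a tree parameterization, and all names here are illustrative:

```python
import random

def sgd_1d(n_poses, edges, iters=2000, gamma=1.0, seed=1):
    """Toy 1D pose-graph SGD in an incremental parameterization.

    edges: constraints (i, j, z) with i < j, meaning x[j] - x[i] should
    equal z. Each step picks one constraint at random and spreads its
    residual over the pose increments between i and j, with a learning
    rate that decreases harmonically.
    """
    random.seed(seed)
    d = [0.0] * (n_poses - 1)          # d[k] = x[k+1] - x[k]

    def poses():
        xs = [0.0]
        for dk in d:
            xs.append(xs[-1] + dk)
        return xs

    for t in range(1, iters + 1):
        i, j, z = random.choice(edges)
        x = poses()
        r = (x[j] - x[i]) - z          # residual of the chosen edge
        step = (gamma / t) * r / (j - i)
        for k in range(i, j):          # distribute the correction
            d[k] -= step
    return poses()
```

On a consistent toy graph, e.g. two odometry edges of length 1 and one loop closure of length 2, the poses settle close to 0, 1, 2; updating increments rather than absolute poses is what lets a single correction propagate along the whole trajectory.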

Least Squares is a rather powerful tool, and I have found myself using it to solve several tasks. To define a Least Squares problem one just needs to define the error functions and the domain of the variables subject to estimation (and potentially the domain of the increments, if working on a manifold representation). Since the backbone of the system stays the same, together with Rainer Kuemmerle, Kurt Konolige, Hauke Strasdat and Wolfram I developed g2o: a flexible system for graph optimization. g2o has proven useful to many people and is a well-known tool in the community that constitutes the backbone of several SLAM systems such as ORB-SLAM or DVO-SLAM.
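The "fixed backbone, user-supplied error functions" idea can be sketched in a few lines. This is a minimal Gauss-Newton solver with numeric Jacobians, not g2o's actual API (which uses analytic Jacobians, sparse solvers and manifold increments):

```python
import numpy as np

def gauss_newton(x0, errors, iters=10, eps=1e-6):
    """Generic Gauss-Newton: the solver stays fixed while the user
    only supplies error functions mapping the state to scalar
    residuals. Jacobians are taken numerically for brevity.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        H = np.zeros((x.size, x.size))
        b = np.zeros(x.size)
        for e in errors:
            r = e(x)
            J = np.zeros(x.size)
            for k in range(x.size):          # numeric Jacobian row
                dx = np.zeros(x.size)
                dx[k] = eps
                J[k] = (e(x + dx) - r) / eps
            H += np.outer(J, J)              # accumulate normal equations
            b += J * r
        x -= np.linalg.solve(H, b)           # Gauss-Newton step
    return x
```

For example, fitting a line y = a*x + b to the points (0, 1), (1, 3), (2, 5) only requires writing one residual per point; the same solver would serve a pose-graph problem with different error functions.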

Of course, leaving out the data association problem results in an oversimplification. This is the reason why I cooperated with Gian Diego Tipaldi in developing an efficient algorithm for estimating the marginal covariances of the nodes of a graph of poses by means of Covariance Intersection. With those covariances it is possible to restrict the search for data associations and make the problem tractable.
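Covariance Intersection itself is a simple rule for fusing two estimates whose cross-correlation is unknown. A minimal sketch (the function name and the fixed omega are illustrative; a real system would choose omega, e.g. to minimize the trace of the fused covariance):

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, omega):
    """Fuse two estimates with unknown cross-correlation.

    For any omega in [0, 1] the fused covariance is consistent:
        P^-1 = omega * Pa^-1 + (1 - omega) * Pb^-1
        x    = P (omega * Pa^-1 xa + (1 - omega) * Pb^-1 xb)
    """
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    P = np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv)
    x = P @ (omega * Pa_inv @ xa + (1 - omega) * Pb_inv @ xb)
    return x, P
```

Unlike a Kalman update, the fused covariance never shrinks below what either input justifies, which is exactly what makes it safe when the two estimates share unknown common information.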

If we are using a laser, the candidate associations can then be validated by one of the many effective scan-matching algorithms in the literature; I recommend having a look at the webpage of Andrea Censi. Within this context, together with Jacopo Serafin, I developed a novel cost function for 3D point cloud registration that also considers the normals. This leads to an enlarged basin of convergence and to a faster decrease of the error. Check out NICP and its open source implementation.
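The core idea of augmenting the residual with the normals can be sketched as follows; the weight and function names are illustrative, not those of the NICP paper, which additionally derives proper information matrices from the local surface statistics:

```python
import numpy as np

def registration_cost(src_pts, src_nrm, tgt_pts, tgt_nrm, w_normal=10.0):
    """Cost over corresponding points that penalizes both point and
    normal misalignment, in the spirit of normal-aware registration.

    Each residual stacks the 3D point difference and the 3D normal
    difference; w_normal trades off the two terms (illustrative value).
    """
    total = 0.0
    for ps, ns, pt, nt in zip(src_pts, src_nrm, tgt_pts, tgt_nrm):
        e_p = pt - ps                       # point residual
        e_n = nt - ns                       # normal residual
        total += e_p @ e_p + w_normal * (e_n @ e_n)
    return float(total)
```

Two surfaces whose points coincide but whose normals disagree still pay a penalty, which is what steers the optimization away from sliding along planar regions.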

Bastian Steder developed an effective algorithm that computes pose graphs using SURF visual features and relies on my optimizer to compute the maximum-likelihood estimate. The videos are impressive!

Solution of Large Factor Graphs

Factor graphs for SLAM might be large, and solving them even with the most advanced techniques might take some time. Luckily, SLAM has a strong spatial and temporal locality that enforces a limited connectivity between the variables and allows for local updates. Exploiting these ideas, we can approach the problem in a hierarchical manner by constructing a pyramid of graphs. HOG-MAN (Hierarchical Graph Optimization on a Manifold) is a system that exploits this consideration by performing selective updates around the neighborhood of the robot to preserve reasonable local accuracy, while achieving global consistency by optimizing the overall layout of the graph represented at the highest level.

Condensed Measurements (CM) is a plugin for g2o that goes one level further, generalizing the concept of hierarchical optimization with the introduction of abstract factors that represent constraints between local maps. These constraints approximate the statistical properties of the local maps' solutions and allow for an effective optimization of the higher levels in the hierarchy. As opposed to HOG-MAN, CM operates on any factor graph, including for instance bundle adjustment, and is not restricted to pose-graph optimization.

It is by exploiting this condensed representation of the global map layout that, together with Mayte Lazaro, we designed a multi-robot laser-based SLAM system. Multi-robot SLAM opens several challenges with respect to single-robot SLAM, namely: limited bandwidth for transmitting the data, scalability issues, and more complex relocalization between agents. The system proposed by Mayte addresses these aspects by exploiting the compactness of the sparse graph to exchange map chunks, and it tackles the distributed optimization by exchanging compressed graphs that connect the entry points of the rendez-vous positions.

Calibration

Calibration is boring. This is perhaps the reason why many real robotic systems come with a poor calibration. However, a poor calibration usually makes the development of any application that is supposed to use the robot much harder: the sensor data are not correctly placed with respect to the robot frame, the speed commands do not produce the desired effects, and so on. To tackle this task, together with Maurilio Di Cicco, we developed a non-parametric approach to remove the bias from depth sensors such as the Asus Xtion or the Microsoft Kinect.
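The flavor of a non-parametric depth correction can be sketched as a lookup table of multipliers learned per pixel and per depth bin from a calibration set; this is a strong simplification with illustrative names and bin counts, not the published method:

```python
import numpy as np

def learn_depth_bias(measured, true_depth, n_bins=8, d_max=8.0):
    """Learn a multiplier per (pixel, depth-bin) cell by averaging
    true/measured ratios over a calibration set.

    measured, true_depth: arrays of shape (n_frames, n_pixels).
    Cells never observed keep the neutral multiplier 1.0.
    """
    n_pixels = measured.shape[1]
    ratio_sum = np.zeros((n_pixels, n_bins))
    count = np.zeros((n_pixels, n_bins))
    bins = np.clip((measured / d_max * n_bins).astype(int), 0, n_bins - 1)
    for f in range(measured.shape[0]):
        for px in range(n_pixels):
            b = bins[f, px]
            ratio_sum[px, b] += true_depth[f, px] / measured[f, px]
            count[px, b] += 1
    return np.where(count > 0, ratio_sum / np.maximum(count, 1), 1.0)

def correct(measured_row, table, n_bins=8, d_max=8.0):
    """Apply the learned per-pixel multipliers to one depth frame."""
    b = np.clip((measured_row / d_max * n_bins).astype(int), 0, n_bins - 1)
    return measured_row * table[np.arange(measured_row.size), b]
```

Being non-parametric, the table makes no assumption on the shape of the distortion: a sensor that over-estimates depth by 10 percent in some region of the image simply ends up with multipliers around 0.91 there.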

Having a calibrated depth sensor is nice, but not sufficient when you need to assemble a mobile platform that integrates several sensors and actuators. In this situation one should determine both the intrinsic parameters of each sensor and the kinematic parameters of the mobile base. With our tool for unsupervised calibration, developed together with Maurilio Di Cicco and Bartolomeo Della Corte, you can let the platform learn its optimal parameters.

Data Association

Binary features are the fashion of the moment in computer vision: BRIEF, ORB and similar all return a binary descriptor that captures the local neighborhood of an image point. When using these descriptors to approach tasks such as relocalization or object recognition, it is typical to compare a descriptor with all those stored in a database and retrieve the closest one. This is a time-consuming procedure that can be sped up by approaches such as bag-of-words, in its excellent implementation by Dorian Galvez Lopez. With HBST, developed by Dominik Schlegel under my supervision, we provide an alternative system for quickly retrieving similar binary descriptors, based on a search tree that is dynamically adapted by analyzing the statistics of the descriptor bits.

Uncertainty Driven Exploration

Using a SLAM algorithm to learn a map requires manually steering a robot through the environment. However, the quality of the resulting map depends on the specific path taken by the robot. In particular, at any point in time a mapping robot can be in one of two situations: exploring new terrain or revisiting known regions. In the revisiting situations the uncertainty of a part of the map is reduced, while in the exploring situations new knowledge about unknown regions is acquired. The problem of exploring the environment consists in selecting the steering commands for the robot so that the map will be maximally consistent over time (in probabilistic terms). This can be seen as choosing the action which minimizes the entropy of the joint estimate of the map and the robot poses. Together with Cyrill Stachniss, we designed an exploration algorithm built on top of an RBPF mapper.
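The entropy criterion for an occupancy grid alone can be sketched in a few lines (a simplification: the actual algorithm considers the joint entropy of map and trajectory over the whole particle set, and the names below are illustrative):

```python
import math

def grid_entropy(cells):
    """Entropy of an occupancy grid as the sum of per-cell binary
    entropies. Unknown cells (p = 0.5) contribute the most, so actions
    that observe them promise the largest information gain."""
    h = 0.0
    for p in cells:
        for q in (p, 1.0 - p):
            if q > 0.0:
                h -= q * math.log2(q)
    return h

def best_action(actions):
    """Pick the action with minimal expected posterior map entropy.

    actions: dict mapping an action name to the predicted cell
    occupancy probabilities after executing it and sensing."""
    return min(actions, key=lambda a: grid_entropy(actions[a]))
```

An action that is predicted to turn unknown cells (p = 0.5, one bit each) into confident ones reduces the entropy the most, which is exactly why exploration and active loop closing both emerge from the same criterion.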

Navigation for Flying Vehicles

In the context of the MuFly project, UNI-Freiburg has the task of building a mapping system for an autonomous micro helicopter. Together with Slawomir Grzonka and Bastian Steder, we are addressing this challenging task by using custom-designed range sensors and embedding state-of-the-art algorithms on processors with limited computational capability. The visual-SLAM system developed by Bastian was a first successful attempt to have a working SLAM approach on this kind of vehicle. With Slawomir Grzonka, we are currently focusing on the use of short-range laser sensors and an IMU to build a quad-rotor which can navigate autonomously. These investigations were rewarded at ICRA-09 with a best paper award. Click here for the paper.

Human Assisted Navigation Systems

After a long time spent using and designing navigation algorithms, I recognize that even the best navigation systems require a high level of experience to be used effectively. This is because of the several parameters and "magic numbers" present in many implementations. For instance, what most people do when running a SLAM algorithm is to acquire a log file with a robot and then try to estimate a map. If the estimation fails, they start changing the parameters based on an inspection of the wrong estimate. Whereas understanding these parameters is complex for non-experienced people, everyone can recognize macroscopic mistakes, like wrongly closed loops or bent corridors. The aim of this research is to develop a system which is able to learn a consistent map based on high-level user inputs, and subsequently to learn the relations between the user input and the parameters.

Long-life map learning

To have an autonomously moving robot, one typically starts by acquiring a map of the environment, and subsequently uses this map for localizing the robot and for planning trajectories. While in principle it would be possible to estimate the pose by running a SLAM algorithm rather than a localization one, the latter is preferred. The reasons are mainly the reduced computational complexity of localization compared to SLAM and the increased robustness coming from a static map. However, the environment is populated by dynamic objects and may change over time. Developing an algorithm which is able to detect those changes and to update the map accordingly, without the need for manual intervention, will make it possible to have a robot that runs for arbitrarily long periods of time with minimal human supervision and low computational requirements. Henrik Kretzschmar recently developed a node-reduction strategy for pose graphs. His algorithm is able to suppress nodes of the graph whose observations bring only little information to the map-building process. In this way, the complexity of the mapping depends on the size of the environment and not on the length of the trajectory taken by the robot.
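The flavor of such a node-reduction scheme can be sketched with a greedy rule on newly observed grid cells; this stand-in criterion and its names are illustrative, whereas the actual algorithm scores the expected information gain of each laser scan:

```python
def prune_nodes(nodes, min_new_cells=2):
    """Toy node reduction: walk the trajectory in order and keep a pose
    node only if its observation adds at least min_new_cells previously
    unseen grid cells, mimicking information-driven sparsification.

    nodes: list of (node_id, observed_cells) in trajectory order.
    """
    seen, kept = set(), []
    for node_id, observed_cells in nodes:
        new = set(observed_cells) - seen
        if len(new) >= min_new_cells:
            kept.append(node_id)
            seen |= new
    return kept
```

A robot patrolling the same corridor for hours keeps adding nodes that observe already-known cells; under a rule of this kind those nodes are discarded, so the graph size tracks the environment rather than the travel time.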

Sometimes the batteries of the robot do not last long enough to map a large environment. Performing the mapping in multiple sessions results in a bunch of partial local maps, each usually affected by its own drift. Rigidly stitching these partial maps together would result in an inconsistent map not good for navigation. Together with Taigo Bonanni, we developed an approach to construct a single consistent global map out of these deformable bodies by exploiting the graph backbone structure of the individual submap chunks.

Teaching SLAM

Research and teaching at the academic level are tightly connected: our students of today will be the researchers of tomorrow. Since most of my research orbits around the SLAM area, I try to make my life easier by investing in the design of tools and systems that aim at making the understanding of these subjects easier. To this end, together with Dominik Schlegel, we designed PRO-SLAM, a simple but well-performing stereo pipeline that, surprisingly, achieves rather decent results when compared to the more complex systems that constitute the current (2017) state of the art. Of course, to better learn mobile robotics it is recommended to have your own robot. To this end, a while ago, initially with Maurilio Di Cicco and then with Luca Iocchi, we designed a mobile platform (MARRtino, from Master in ARtificial Intelligence and Robotics) that serves the purpose. The project was successful, but the board we used was hard to find, so we refactored the firmware for Arduino boards. The name of the firmware is Orazio.

Object Recognition and Model Learning

Having a robot which is able to autonomously move and learn maps of the environment is only a first step towards a device which can assist humans in their everyday activities. To this end, a robot should be able to interact with everyday objects to accomplish specific non-trivial tasks. Accordingly, it should be able to localize and distinguish known objects in the scene. Together with Bastian Steder, we developed a robust algorithm for determining which objects are present in a 3D scene and where they are located, based on range images. Both the models and the scene can be given to the system as 3D point clouds. Subsequently, with Michael Ruhnke, we designed an algorithm which is able to learn object models from an unordered sequence of 3D scans by analyzing the recurrences of partial views of the objects in the scene.