A wearable SLAM system for real-time geolocalization has been developed as part of the ANR project MALIN. The aim is to evaluate the performance of different localization technologies for accurate real-time geolocalization in highly dynamic scenarios. A fused Laser-Visual-Inertial (LVI) system was created to perform robust and accurate localization. The method consists in communicating optimization events (such as loop closures) between the VI-SLAM and the LiDAR-SLAM, which run simultaneously. The extrinsic calibration between the sensors was also obtained by aligning the trajectories estimated by each sensor: the transformation that aligns them is directly related to the extrinsics between the sensors.
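As an illustration of this trajectory-based calibration, the sketch below performs a closed-form rigid alignment (Kabsch/Umeyama, without scale) between time-synchronized position samples of the two SLAM trajectories; the recovered rotation and translation relate the two SLAM frames and are thus correlated to the sensor extrinsics. This is a minimal sketch under the synchronization assumption, not the project's exact calibration procedure; function names and numeric values are hypothetical.

```python
import numpy as np

def align_trajectories(traj_a, traj_b):
    """Estimate the rigid transform (R, t) that maps trajectory B onto
    trajectory A, given time-synchronized position samples (N x 3 arrays).
    Closed-form least-squares alignment (Kabsch/Umeyama, no scale)."""
    assert traj_a.shape == traj_b.shape
    mu_a, mu_b = traj_a.mean(axis=0), traj_b.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (traj_b - mu_b).T @ (traj_a - mu_a)
    U, _, Vt = np.linalg.svd(H)
    # Enforce a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_a - R @ mu_b
    return R, t

# Example: recover a known transform between two noisy, synchronized trajectories
rng = np.random.default_rng(0)
traj_lidar = rng.uniform(-5, 5, size=(200, 3))            # positions from LiDAR-SLAM
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t_true = np.array([0.10, 0.05, -0.20])                    # hypothetical extrinsics
traj_visual = (R_true @ traj_lidar.T).T + t_true + 0.01 * rng.standard_normal((200, 3))
R_est, t_est = align_trajectories(traj_visual, traj_lidar)
```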
Figure 1. Top left: Fusion scheme for performing real-time geolocalization; the optimization events of LI-SLAM and VI-SLAM are communicated. Bottom left: Performance of the WeCo-SLAM system for 3D mapping. Top right: WeCo system and 3D LOD reconstruction of a visited building. Bottom right: Estimation of the GPS trajectory.
3D reconstruction is performed at three LODs (levels of detail), applying 3D surface reconstruction processes to noisy 3D point clouds with geometry-based approaches from the CGAL library. We have proposed a 3D indoor reconstruction of the visited buildings at three different LODs. Permanent structures such as walls, ceilings and floors are detected by classification methods. For LOD0, the walls are projected onto the main floor to generate a floorplan. For LOD1, a 3D model composed of flat surfaces is obtained by optimizing over the intersections of the detected planes (Kinetic Shape Reconstruction algorithm). Finally, for LOD2, a Poisson reconstruction adds details of the scene such as furniture, windows, etc.
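The following is a rough sketch of such a pipeline using Open3D as a stand-in for the CGAL-based approach described above (Kinetic Shape Reconstruction for LOD1 is not reproduced here): greedy RANSAC plane extraction as a proxy for detecting permanent structures, projection of wall points for an LOD0-style footprint, and Poisson reconstruction for LOD2-style detail. Function names, thresholds and the input file are hypothetical.

```python
import numpy as np
import open3d as o3d  # illustrative stand-in for the CGAL-based pipeline

def detect_planes(pcd, max_planes=10, dist=0.05):
    """Greedy RANSAC plane extraction: a rough proxy for the classification
    of permanent structures (walls, floors, ceilings)."""
    planes, rest = [], pcd
    for _ in range(max_planes):
        if len(rest.points) < 500:
            break
        model, inliers = rest.segment_plane(distance_threshold=dist,
                                            ransac_n=3, num_iterations=1000)
        planes.append((model, rest.select_by_index(inliers)))
        rest = rest.select_by_index(inliers, invert=True)
    return planes

def lod0_footprint(planes, normal_z_max=0.3):
    """LOD0-style output: keep near-vertical planes (walls) and project their
    points onto the ground plane to obtain a 2D floorplan point set."""
    walls = [cloud for (a, b, c, d), cloud in planes if abs(c) < normal_z_max]
    pts = np.vstack([np.asarray(w.points) for w in walls])
    return pts[:, :2]          # x, y coordinates of the footprint

def lod2_mesh(pcd, depth=9):
    """LOD2-style output: Poisson surface reconstruction to recover finer
    details of the scene (furniture, window frames, ...)."""
    pcd.estimate_normals()
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=depth)
    return mesh

# Usage (hypothetical file name):
# pcd = o3d.io.read_point_cloud("building_scan.ply")
# footprint = lod0_footprint(detect_planes(pcd))
# mesh = lod2_mesh(pcd)
```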
The MALIN project featured very challenging scenarios, such as smoke-filled rooms, mirrors, repetitive patterns and even crawling agents. The behaviour of each SLAM in each scenario was therefore observed and analysed in order to weight the contribution of each technology to the pose estimate. Three competitions were organized to test and evaluate the wearable system, and the system was upgraded at each competition.
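One common way to express such a per-scenario weighting of independent estimates (shown here purely as an illustrative sketch, not the project's actual scheme) is inverse-covariance fusion, where the estimate with lower uncertainty receives the larger weight. All covariance values below are made up.

```python
import numpy as np

def fuse_estimates(means, covariances):
    """Inverse-covariance (information) weighted fusion of independent
    estimates of the same state."""
    infos = [np.linalg.inv(C) for C in covariances]
    cov_fused = np.linalg.inv(sum(infos))
    mean_fused = cov_fused @ sum(I @ m for I, m in zip(infos, means))
    return mean_fused, cov_fused

# Hypothetical example: LiDAR-SLAM trusted more in x/y, VI-SLAM more in z
p_lidar, C_lidar = np.array([1.00, 2.00, 0.0]), np.diag([0.01, 0.01, 0.04])
p_vio,   C_vio   = np.array([1.05, 1.95, 0.1]), np.diag([0.04, 0.04, 0.01])
p_fused, C_fused = fuse_estimates([p_lidar, p_vio], [C_lidar, C_vio])
```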
TEAM: Location, Orientation and 3D CArtography (LOCA-3D)
The goal of this challenge is to develop and test accurate localization solutions for emergency-response officers and security forces. These solutions must be effective inside buildings and, more generally, in conditions where satellite positioning systems do not work satisfactorily. The objective is to improve expertise in autonomous localization solutions (trajectography) in disturbed environments. Secondary functions such as agent orientation and 3D cartography (3D mapping) of the visited areas are also expected.
The aim of our LOCA-3D project (Location, Orientation and 3D CArtography) is to overcome this lack of knowledge in the technical and technological aspects of indoor localization while respecting the challenge constraints. Our solution, based on a combination of several sensors (inertial and optical), performs the primary localization function as well as the secondary functions (orientation and 3D mapping).
The concept is based on an advanced inertial system that computes the agent's trajectory. Part of the inertial sensor drift is compensated by the vision system, which simultaneously generates point clouds. The reconstructed trajectory of the moving system allows these point clouds to be referenced in a global frame. In addition, the solution uses robust methods for 3D reconstruction from point clouds and for filtering measurement noise, in order to keep the coherent part of the information. The 3D mapping is performed by both offline and progressive reconstruction. The proposed consortium is complementary: each partner brings strong added value to the development of the global solution. [1]
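As a minimal illustration of how the reconstructed trajectory lets the per-scan point clouds be referenced in a global frame, the sketch below transforms each local point cloud with its corresponding estimated pose (R, t) and stacks the results into one global map. The function names are hypothetical and the sketch omits filtering and noise handling.

```python
import numpy as np

def to_global_frame(points_local, R_wb, t_wb):
    """Transform a point cloud expressed in the sensor/body frame into the
    global (world) frame using the estimated pose (R_wb, t_wb)."""
    return (R_wb @ points_local.T).T + t_wb

def accumulate_map(scans, poses):
    """scans: list of (N_i x 3) arrays; poses: list of (R, t) along the SLAM
    trajectory. Returns a single stacked global point cloud."""
    return np.vstack([to_global_frame(s, R, t) for s, (R, t) in zip(scans, poses)])
```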