Accurate position estimation is one of the major challenges in mobile autonomous robotics and a key component of any such system. In most cases, the movement sequence of a mobile robot (not only an autonomous one) comprises three phases: localization, path planning, and path execution.
During localization, a reference coordinate system must be chosen; the robot's position and orientation within that system are determined using a wide variety of sensors. For a mobile robot following a pre-planned path, the path is updated from the current position to the goal whenever the robot deviates from it. For an autonomous mobile robot, by contrast, the path is generated at runtime, and the movement sequence is modified every time the robot encounters an obstacle on its course towards the goal.
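The replanning trigger described above can be sketched in a few lines. This is an illustrative example, not taken from any specific system: the path is a list of (x, y) waypoints, and the deviation threshold and function names are assumptions.

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to the segment a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def needs_replanning(position, path, threshold=0.5):
    """True when the robot has drifted more than `threshold` from the pre-planned path."""
    deviation = min(point_segment_distance(position, a, b)
                    for a, b in zip(path, path[1:]))
    return deviation > threshold
```

A real system would typically feed this check with the pose produced by the localization module and call the planner again only when it returns true, since replanning on every cycle is wasteful.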
Localization can also be divided into two types: absolute and relative. Absolute localization relies on landmarks, maps, satellite information and signals from external sensors, and uses this information to determine the robot's global position and orientation. On the other hand, relative localization is generally used during movement, because absolute localization methods are more time-consuming.
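Relative localization is often implemented as dead reckoning: integrating small odometry increments to keep the pose up to date between (slower) absolute fixes. A minimal sketch for a differential-drive robot follows; the wheel-base value and function names are illustrative assumptions.

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Advance the pose (x, y, theta) given the distances travelled
    by the left and right wheels since the last update."""
    d_center = (d_left + d_right) / 2.0        # forward displacement of the robot center
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Integrate using the mid-point heading for better accuracy on arcs.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

Because each increment carries sensor noise, the error of dead reckoning grows without bound, which is exactly why periodic absolute fixes (or SLAM, discussed below) are needed.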
Mobile robot localization and mapping is the process of simultaneously tracking the position of a mobile robot relative to its environment and building a map of that environment, and it has been a central research topic in mobile robotics. The two problems are coupled: accurate localization requires an accurate map, and building an accurate map requires accurate localization. Therefore, simultaneous localization and map building (SLAM) is a critical underlying capability for successful mobile robot navigation in a large environment, irrespective of the high-level goals or applications.
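The coupling between localization and mapping can be illustrated with a toy one-dimensional Kalman filter that jointly estimates the robot position and one landmark position. This is a didactic sketch only, not the method of any paper cited here, and all noise values are made-up assumptions.

```python
import numpy as np

# State vector: [x_r, l] = [robot position, landmark position].
# P is the joint 2x2 covariance; its off-diagonal terms are what
# tie mapping quality to localization quality.

def predict(state, P, u, motion_var):
    """Robot moves by commanded distance u; only robot uncertainty grows."""
    state = state + np.array([u, 0.0])
    P = P + np.diag([motion_var, 0.0])
    return state, P

def update(state, P, z, meas_var):
    """Incorporate a range measurement z = l - x_r + noise."""
    H = np.array([[-1.0, 1.0]])          # measurement Jacobian
    y = z - (state[1] - state[0])        # innovation
    S = H @ P @ H.T + meas_var           # innovation variance
    K = (P @ H.T) / S                    # Kalman gain
    state = state + K[:, 0] * y
    P = (np.eye(2) - K @ H) @ P
    return state, P
```

Running a predict step followed by an update shrinks the landmark variance even though only a relative measurement was made, which is the essence of SLAM: observing landmarks from an uncertain pose still improves both estimates jointly.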
To achieve SLAM, different sensor modalities are available, such as sonar, laser range finders and vision. Sonar is fast and cheap but usually very crude, whereas a laser scanning system is active and accurate but slow. Vision systems are passive and offer high resolution. Many early successful approaches (Borenstein et al. 1996) rely on artificial landmarks, such as bar-code reflectors, ultrasonic beacons and visual patterns, and therefore do not function properly in beacon-free environments. Vision-based approaches using stable natural landmarks in unmodified environments are thus highly desirable for a wide range of applications. The map built from these natural landmarks serves as the basis for high-level tasks such as mobile robot navigation.
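To make the beacon-based idea concrete, here is a hedged sketch of absolute localization by trilateration: given three beacons (e.g. ultrasonic) at known positions and a range reading to each, subtracting the circle equations pairwise removes the quadratic terms and leaves a 2x2 linear system for the robot position. Beacon layout and function names are illustrative.

```python
def trilaterate(beacons, ranges):
    """Return the (x, y) position consistent with range readings
    to three beacons at known positions."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    # Linear system A @ [x, y] = b obtained by subtracting
    # circle 2 and circle 3 from circle 1.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero if the beacons are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With noisy real ranges the system is solved in a least-squares sense instead, and more than three beacons improve the fix; this exact-solution version only shows the geometry. Natural-landmark approaches replace the surveyed beacons with features detected in sensor data, which is precisely what makes them usable in unmodified environments.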
Author: David Alejandro Trejo Pizzo, IEEE Member and researcher at AIGROUP, working on the FIC Project. Student at Universidad de Palermo.
References:
Lee, S. & Song, J.-B. Mobile robot localization using optical flow sensors. International Journal of Control, Automation and Systems, vol. 2, no. 4, pp. 485-493, December 2004.
Se, S.; Lowe, D. & Little, J. Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks.