Vision-based topological localization and mapping for autonomous robotic systems has been a subject of increased interest in recent years. The need to map larger environments and to describe them at different levels of abstraction calls for alternatives to purely metric representations and for the ability to handle large amounts of data efficiently. Most successful approaches to appearance-based localization and mapping with large datasets rely on local image features. In our work we propose to use the global gist descriptor for appearance-based mapping, place recognition and localization with omnidirectional images. We show how to represent an omnidirectional image with the gist descriptor, present different gist similarity measures and describe our proposals for solving the tasks above. We demonstrate the effectiveness of the proposed representation in extensive experiments with 360° field-of-view panoramas and catadioptric images, comparing it with local feature based approaches.
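A minimal sketch of this representation is given below (illustrative only, not the implementation used in our experiments): the panorama is split into a few views, each view gets a simplified gist-like descriptor (grid-averaged oriented-gradient energy stands in for the usual Gabor-filter-bank gist), and two panoramas are compared with a rotation-invariant distance that takes the minimum over circular shifts of the view order. All function names and parameter values are illustrative.

    import numpy as np

    def view_descriptor(view, grid=4, n_orient=4):
        """Simplified gist-like descriptor for a grayscale view: average
        oriented-gradient energy on a grid x grid spatial layout (a stand-in
        for the Gabor-filter-bank gist)."""
        gy, gx = np.gradient(view.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
        bins = np.minimum((ang / np.pi * n_orient).astype(int), n_orient - 1)
        h, w = view.shape
        desc = np.zeros((grid, grid, n_orient))
        for i in range(grid):
            for j in range(grid):
                cell = (slice(i * h // grid, (i + 1) * h // grid),
                        slice(j * w // grid, (j + 1) * w // grid))
                for o in range(n_orient):
                    desc[i, j, o] = mag[cell][bins[cell] == o].sum()
        d = desc.ravel()
        return d / (np.linalg.norm(d) + 1e-9)

    def panorama_gist(panorama, n_views=4):
        """Represent a 360-degree panorama as a list of per-view descriptors."""
        views = np.array_split(panorama, n_views, axis=1)
        return [view_descriptor(v) for v in views]

    def panorama_distance(gists_a, gists_b):
        """Rotation-invariant similarity: minimum summed view distance over
        circular shifts of the view order of the second panorama."""
        n = len(gists_a)
        dists = []
        for s in range(n):
            shifted = gists_b[s:] + gists_b[:s]
            dists.append(sum(np.linalg.norm(a - b) for a, b in zip(gists_a, shifted)))
        return min(dists)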
Researchers: A.C. Murillo, J. Kosecka, G. Singh.
Projects: DPI2009-14664-C02-01, DPI2009-08126
Related Publications:
Localization in Urban Environments using a Panoramic Gist Descriptor.
Experiments in Place Recognition using Gist Panoramas.
Gist vocabularies in omnidirectional images for appearance based mapping and localization.
Figure: Average view in each of the gist-vocabulary clusters built from 1000 reference panoramas (4000 views).
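In its simplest form, the vocabulary behind the figure above can be obtained by clustering the per-view gist descriptors of the reference panoramas with k-means; the figure then shows the average member view of each cluster. The sketch below assumes descriptors like those computed above and uses scikit-learn's KMeans; the number of words is illustrative.

    import numpy as np
    from sklearn.cluster import KMeans

    def build_gist_vocabulary(view_descriptors, n_words=20):
        """Cluster per-view gist descriptors from the reference panoramas
        into a small vocabulary of 'gist words' (the cluster centers)."""
        X = np.vstack(view_descriptors)          # (total_views, descriptor_dim)
        return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(X)

    def assign_to_word(vocabulary, view_descriptor):
        """Label a new view with the index of its closest vocabulary cluster."""
        return int(vocabulary.predict(view_descriptor.reshape(1, -1))[0])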
We have designed and evaluated a hierarchical process for efficient vision-based global localization. Its stages form a hierarchy in which the reference image set (which serves as a topological map) is pruned in consecutive steps. The initial steps are equivalent to image retrieval in large data sets and yield the topological localization of the mobile device. The final steps provide a metric localization of the current view with respect to the reference image set. This process can be applied to global/initial localization as well as to loop-closing tasks.
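A minimal sketch of this hierarchy follows, assuming one global descriptor per reference image, precomputed local features, and two caller-supplied helpers: match_fn for local feature matching and pose_fn for the metric step. Both helpers are placeholders introduced for illustration; they are not part of our implementation.

    import numpy as np

    def coarse_prune(query_desc, ref_descs, keep=20):
        """Stage 1: rank the reference map by global-descriptor distance and
        keep only the closest candidates (topological pruning)."""
        d = np.linalg.norm(ref_descs - query_desc, axis=1)
        return np.argsort(d)[:keep]

    def rerank_by_local_matches(query_feats, ref_feats_list, candidates, match_fn):
        """Stage 2: re-rank the surviving candidates by the number of local
        feature correspondences found by match_fn."""
        counts = [(c, len(match_fn(query_feats, ref_feats_list[c]))) for c in candidates]
        counts.sort(key=lambda t: -t[1])
        return counts

    def hierarchical_localize(query_desc, query_feats, ref_descs, ref_feats_list,
                              match_fn, pose_fn, keep=20):
        """Full hierarchy: global pruning -> local-feature re-ranking (topological
        localization) -> metric localization w.r.t. the best reference image."""
        candidates = coarse_prune(query_desc, ref_descs, keep)
        ranked = rerank_by_local_matches(query_feats, ref_feats_list, candidates, match_fn)
        best_ref, _ = ranked[0]
        matches = match_fn(query_feats, ref_feats_list[best_ref])
        # pose_fn is a placeholder for the metric step, e.g. a relative pose
        # estimated from the correspondences with the best reference view.
        return best_ref, pose_fn(matches)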
We have also studied and evaluated different image features for the robot localization task: we have improved line matching, increasing its usefulness for robot localization, and we have evaluated these features for topological and metric global localization, together with SURF features and state-of-the-art wide-baseline image matching.
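On the local feature side, a standard building block is nearest-neighbour matching with a ratio test over SURF-like descriptors. The generic sketch below (plain NumPy, illustrative parameter values; it requires at least two descriptors in the second set) can serve as the match_fn helper of the previous sketch.

    import numpy as np

    def ratio_test_matches(desc_a, desc_b, ratio=0.8):
        """Nearest-neighbour matching with a ratio test between two descriptor
        sets, desc_a of shape (n_a, d) and desc_b of shape (n_b, d).
        Returns a list of (index_in_a, index_in_b) correspondences."""
        # squared pairwise Euclidean distances
        d2 = (np.sum(desc_a ** 2, axis=1)[:, None]
              + np.sum(desc_b ** 2, axis=1)[None, :]
              - 2.0 * desc_a @ desc_b.T)
        d2 = np.maximum(d2, 0.0)
        order = np.argsort(d2, axis=1)
        matches = []
        for i in range(desc_a.shape[0]):
            best, second = order[i, 0], order[i, 1]
            # accept only if the best match is clearly closer than the second best
            if np.sqrt(d2[i, best]) < ratio * np.sqrt(d2[i, second]):
                matches.append((i, int(best)))
        return matches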
Researchers: A.C. Murillo, J.J. Guerrero, C. Sagüés.
Projects: MCYT/FEDER DPI2003-07986, DPI2006-07928, IST-1-045062-URUS-STP
Related Publications:
From Omnidirectional Images to Hierarchical Localization
SURF features for efficient robot localization with omnidirectional images
Topological and metric robot localization through computer vision techniques