Wearable Vision for Recognition and Semantic Maps

Semantic labeling of indoor places

Omnidirectional vision systems are of particular interest because they allow a more compact and efficient representation of the environment. Our approach to omnidirectional-vision-based scene labeling, for augmented indoor topological mapping, introduces novel ideas to increase the semantic information of a typical indoor topological map: we pay special attention to the semantic labels of the different types of transitions between places, and propose a simple way to include this semantic information as part of the criteria for segmenting the environment into topological regions. This work builds on an efficient catadioptric image representation based on the Gist descriptor, which is used to classify the acquired views into types of indoor regions. The basic types of indoor regions considered are Place and Transition; the latter is further divided into more specific subclasses, namely door, stairs, and elevator. Besides using the result of this labeling, the proposed mapping approach includes a probabilistic model to enforce spatio-temporal consistency. All the proposed ideas have been evaluated on a new indoor dataset, also presented in this paper, captured with our wearable catadioptric system.
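To make the labeling pipeline concrete, the following is a minimal sketch of Gist-style classification of views into the region types above. The orientation-energy descriptor here is a simplified stand-in for the full Gist descriptor used in this work, and the function names, parameters, and SVM choice are illustrative assumptions rather than the exact configuration of the system.

```python
# Simplified Gist-like descriptor + classifier sketch (illustrative only):
# the actual system uses the full Gist descriptor on catadioptric images.
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def gist_like(img, n_orientations=4, grid=(4, 4)):
    """Pool oriented-edge energy over a coarse spatial grid (Gist-style)."""
    img = img.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)   # horizontal gradient
    gy = ndimage.sobel(img, axis=0)   # vertical gradient
    gh, gw = grid
    h, w = img.shape
    h2, w2 = h - h % gh, w - w % gw   # crop so grid cells tile evenly
    feats = []
    for k in range(n_orientations):
        theta = np.pi * k / n_orientations
        # Directional derivative magnitude as a cheap oriented-energy proxy.
        energy = np.abs(np.cos(theta) * gx + np.sin(theta) * gy)[:h2, :w2]
        cells = energy.reshape(gh, h2 // gh, gw, w2 // gw).mean(axis=(1, 3))
        feats.append(cells.ravel())
    return np.concatenate(feats)

# Basic region types: Place, plus the Transition subclasses.
LABELS = ["place", "door", "stairs", "elevator"]

def train_classifier(images, labels):
    """images: list of 2-D grayscale arrays; labels: strings from LABELS."""
    X = np.stack([gist_like(im) for im in images])
    return SVC(probability=True).fit(X, labels)
```

The classifier's per-frame probabilities (`clf.predict_proba`) feed the temporal model sketched next.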
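The probabilistic model is not spelled out in this summary, so the sketch below shows one standard way to impose spatio-temporal consistency: a discrete Bayes filter that fuses per-frame classifier probabilities with a transition prior favoring label persistence, plus a helper that uses the detected transition labels as topological region boundaries. `smooth_labels` and `segment_regions` are hypothetical names for illustration.

```python
import numpy as np

def smooth_labels(frame_probs, p_stay=0.9):
    """Discrete Bayes filter over frame labels.

    frame_probs: (T, K) per-frame class probabilities, columns ordered
    as in clf.classes_. Returns smoothed label indices per frame.
    """
    T, K = frame_probs.shape
    # Transition prior: keep the current label with probability p_stay,
    # switch uniformly to any other label otherwise.
    A = np.full((K, K), (1.0 - p_stay) / (K - 1))
    np.fill_diagonal(A, p_stay)
    belief = np.full(K, 1.0 / K)
    out = np.empty(T, dtype=int)
    for t in range(T):
        belief = A.T @ belief             # predict from previous belief
        belief = belief * frame_probs[t]  # fuse per-frame evidence
        belief /= belief.sum()
        out[t] = belief.argmax()
    return out

def segment_regions(label_names, transitions=("door", "stairs", "elevator")):
    """Cut the image sequence into topological regions at transitions."""
    return [t for t in range(1, len(label_names))
            if label_names[t] in transitions
            and label_names[t - 1] not in transitions]
```

For a trained classifier `clf` and descriptor matrix `X`, one would run `idx = smooth_labels(clf.predict_proba(X))` and then `segment_regions([clf.classes_[i] for i in idx])` to obtain candidate region boundaries.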

Personal assistance for navigation

The prototype is used to demonstrate our approach for people localization and guidance. Typical hierarchical localization approaches from mobile robotics are adapted to human-centered applications using wearable sensors. A distinctive characteristic of our approach is the use of omnidirectional vision; the main steps consist of visual odometry running under a topological and semantic localization process.
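As a rough illustration of how the coarse and fine levels fit together, the sketch below runs semantic/topological localization from each omnidirectional view and accumulates visual odometry within the current node, assuming the classifier components above. The `TopoMap` interface and the `estimate_motion` placeholder are hypothetical; a real system would plug in the actual odometry front end and map.

```python
import numpy as np

class TopoMap:
    """Toy topological map: semantically labeled nodes plus adjacency."""
    def __init__(self, labels, edges):
        self.labels = labels   # node id -> label, e.g. {0: "place", 1: "door"}
        self.edges = edges     # node id -> list of adjacent node ids

    def neighbor_with_label(self, node, label):
        if node is None:  # initialization: any node with the observed label
            return next(n for n, l in self.labels.items() if l == label)
        return next((n for n in self.edges[node]
                     if self.labels[n] == label), node)

def estimate_motion(prev_img, img):
    """Hypothetical visual-odometry front end: returns the relative SE(2)
    motion as a 3x3 homogeneous matrix (identity stub here)."""
    return np.eye(3)

class HierarchicalLocalizer:
    def __init__(self, clf, topo_map):
        self.clf = clf            # trained region classifier (see above)
        self.map = topo_map
        self.node = None          # coarse level: current topological node
        self.pose = np.eye(3)     # fine level: pose within the node

    def step(self, prev_img, img):
        # Coarse level: semantic label of the current view selects a node.
        probs = self.clf.predict_proba([gist_like(img)])[0]
        label = self.clf.classes_[probs.argmax()]
        if self.node is None or label != self.map.labels[self.node]:
            self.node = self.map.neighbor_with_label(self.node, label)
            self.pose = np.eye(3)   # restart the local frame on node entry
        # Fine level: visual odometry accumulates motion within the node.
        self.pose = self.pose @ estimate_motion(prev_img, img)
        return self.node, self.pose
```

Resetting the local pose on every node change keeps the metric estimate anchored to the topological level, which is the usual division of labor in hierarchical localization.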