Project Overview

GuideBeagle is a wearable computer that performs localization and mapping in indoor environments. The only sensor it relies on is a single camera, and it is intended to be the basis of applications such as Augmented Reality tools and games, visitor guides and surveillance.

What is it good for?

A system that provides localization in indoor environments, such as GuideBeagle, is a requirement for numerous applications: visitor guides (e.g. for museums or stores) and control systems for mobile robots (e.g. for surveillance or domestic services).

GuideBeagle can serve applications that need only coarse location data, but its Computer Vision approach also allows for high-precision localization. This high-precision vision-based localization is compatible with the requirements of many Augmented Reality (AR) applications.

The renderer of an AR application must know the pose from which each frame in a video stream was captured, in order to overlay it with precisely placed virtual objects and present the augmented images to the user.
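
As a minimal sketch of this step, the snippet below projects a hypothetical virtual object into a frame once the camera pose is known; the intrinsic matrix, the pose and the object position are all assumed values for illustration.

    import numpy as np
    import cv2

    # Assumed camera intrinsics (focal lengths and principal point, pixels).
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])

    # Camera pose for this frame, as a localization module would estimate
    # it: rotation (Rodrigues vector) and translation.
    rvec = np.zeros(3)
    tvec = np.zeros(3)

    # A hypothetical virtual object placed 2 meters in front of the camera.
    object_points = np.array([[0.0, 0.0, 2.0]])

    # Project the 3D point into the image; the renderer draws the object
    # at the resulting pixel, on top of the captured frame.
    image_points, _ = cv2.projectPoints(object_points, rvec, tvec, K, None)
    u, v = image_points[0, 0]
    print("Draw virtual object at pixel (%.1f, %.1f)" % (u, v))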

Apart from localization, the map created by GuideBeagle can be used in other ways. For example, it may be used by an AR system to determine which regions of the space are visible at a given moment. A system such as GuideBeagle can also facilitate the identification of scene objects, even dynamic ones.
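
As an illustration of such a visibility query, the sketch below checks which landmarks of a hypothetical point map fall inside the current frame; the camera pose, intrinsics and landmark positions are all assumed values.

    import numpy as np
    import cv2

    WIDTH, HEIGHT = 640, 480
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])

    def visible_landmarks(landmarks, rvec, tvec):
        """Return the indices of the 3D landmarks visible in the frame."""
        R, _ = cv2.Rodrigues(rvec)
        visible = []
        for i, X in enumerate(landmarks):
            Xc = R.dot(X) + tvec          # landmark in camera coordinates
            if Xc[2] <= 0:                # behind the camera
                continue
            u, v, w = K.dot(Xc)
            u, v = u / w, v / w           # perspective projection
            if 0 <= u < WIDTH and 0 <= v < HEIGHT:
                visible.append(i)
        return visible

    # Three assumed landmarks; camera at the origin looking down the Z axis.
    landmarks = [np.array([0.0, 0.0, 3.0]),
                 np.array([5.0, 0.0, 1.0]),
                 np.array([0.0, 0.0, -2.0])]
    print(visible_landmarks(landmarks, np.zeros(3), np.zeros(3)))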

GuideBeagle might be used in the future to build a “guide dog” system for visually impaired people, but this is not an initial concern.

What is it made of?

GuideBeagle will be built around a BeagleBoard powered by batteries. It may be carried by the user as a wearable or hand-held computer. The peripherals connected to the board are a camera, which provides the input images analyzed by the system, and components used for user interfacing --- a touch screen and audio devices, for example.
Programming will initially be done in Python, using libraries such as OpenCV or OpenMAX. At a later stage the most critical parts of the system will be replaced by C/C++ implementations. The heaviest image processing routines will be written for the DSP or will exploit NEON technology, either in C compiled with a suitable compiler or directly in assembly language.
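
As a sketch of this prototyping workflow, the snippet below detects point features in a frame using OpenCV from Python. Inner loops like this one are the natural candidates for the later C/DSP/NEON optimization; the file name and parameters are illustrative.

    import cv2

    # Load one frame (file name is illustrative) and convert to grayscale.
    frame = cv2.imread("frame.png")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect corner features, candidates for visual landmarks. This is the
    # kind of routine that would later be rewritten in C or for the DSP.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
    if corners is not None:
        print("Detected %d candidate landmarks" % len(corners))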

During the initial development of GuideBeagle there will be no need for expensive peripherals. Initial work will focus on the image processing routines, for which recorded videos can be used and results analyzed with the help of a desktop computer. At a later stage an inexpensive webcam can be attached to the USB port for development of the real-time processing; at this stage the user interface can rely on conventional displays and standard keyboards, mice and joysticks. Later on the system can be enhanced with more sophisticated devices such as touch screens or even head-mounted displays and special cameras (e.g. with fish-eye lenses).
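
A minimal sketch of this setup: the same processing loop can read a recorded video during offline development and a USB webcam later, simply by changing the source handed to OpenCV (the file name below is illustrative).

    import cv2

    # Recorded video for offline development; replace the file name with 0
    # to read from a webcam attached to the USB port.
    source = "indoor_walk.avi"
    cap = cv2.VideoCapture(source)

    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of file, or camera disconnected
        # ... run the image processing routines on `frame` here ...

    cap.release()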

How does it work?

Basic localization --- When a map is available the device can tell the location of the user inside the room. For example, it can draw a map and a point representing the user. With that information the system might provide information about the place the user is in, or give instructions for the user to follow. The basic capability of the system is therefore that of a GPS receiver, except that it works in a very different scenario.
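
A minimal sketch of this display, assuming the map is a list of wall segments and the localization module provides the user position in room coordinates (all values below are illustrative):

    import numpy as np
    import cv2

    SCALE = 50  # pixels per meter in the displayed map

    def draw_map(walls, user_xy):
        """Render the wall segments and the user position, top-down."""
        img = np.full((400, 400, 3), 255, np.uint8)
        for (x0, y0), (x1, y1) in walls:
            cv2.line(img, (int(x0 * SCALE), int(y0 * SCALE)),
                     (int(x1 * SCALE), int(y1 * SCALE)), (0, 0, 0), 2)
        ux, uy = user_xy
        cv2.circle(img, (int(ux * SCALE), int(uy * SCALE)), 5, (0, 0, 255), -1)
        return img

    # Illustrative map: a 6 m x 6 m room with the user near its center.
    walls = [((1, 1), (7, 1)), ((7, 1), (7, 7)),
             ((7, 7), (1, 7)), ((1, 7), (1, 1))]
    cv2.imwrite("map.png", draw_map(walls, (4.0, 3.5)))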


Precise camera localization for AR --- GuideBeagle can estimate the exact pose (position and orientation) of the camera at the moment it captured each frame in a stream. This is a basic requirement for AR applications, since it allows virtual objects to be rendered over the captured scene. Objects can also be identified, and if the map is rich enough it is possible to simulate the occlusion of virtual objects by real objects. This is not always done in the simplest AR applications, but it can have a very significant positive impact on the user experience. One important detail is that because the localization is computed from the same images presented to the user, there is no need for calibration, and very good alignment can be achieved. The conventional use of sensors such as magnetometers and accelerometers does not offer the same quality.
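
The core of this step is estimating the camera pose from correspondences between 3D landmarks in the map and their 2D detections in the frame. A minimal sketch with OpenCV, using assumed intrinsics and made-up correspondences:

    import numpy as np
    import cv2

    # Assumed camera intrinsics.
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])

    # 3D landmark positions from the map (meters) and where they were
    # detected in the current frame (pixels); values are illustrative.
    points_3d = np.array([[0.0, 0.0, 4.0], [1.0, 0.0, 4.0],
                          [0.0, 1.0, 4.0], [1.0, 1.0, 4.0]])
    points_2d = np.array([[320.0, 240.0], [445.0, 240.0],
                          [320.0, 365.0], [445.0, 365.0]])

    # Recover the pose of the camera for this frame; rvec and tvec are
    # exactly what an AR renderer needs to place virtual objects.
    ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, K, None)
    print("pose found:", ok, rvec.ravel(), tvec.ravel())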


Supervised map-building --- In the map-building process the system acquires new visual landmarks as the camera moves through the environment. Landmarks can eventually be integrated into higher-level structures such as the walls that limit the room, or large pieces of furniture such as tables and shelves. There are multiple kinds of landmarks, which require different algorithms to be detected in the images; once fully integrated into the map, these landmarks can be used for localization and other tasks. GuideBeagle can also work as a supervised mapping system: the user can intervene in the mapping process, and the system can ask the user questions to resolve ambiguities and ease the mapping task. The system can also be tailored so that the user can provide information that is relevant only to specific applications.
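
A minimal sketch of the supervised part of this loop, with an assumed landmark record and a plain console question; a real system would ask through the touch screen or the audio interface instead.

    # A newly detected landmark, candidate for inclusion in the map
    # (fields and values are illustrative).
    candidate = {
        "position": (2.1, 0.4, 1.3),  # estimated 3D position, meters
        "kind": "corner",             # the detector that produced it
        "structure": None,            # higher-level structure, if known
    }

    # When the mapper cannot resolve an ambiguity, it asks the user.
    options = ["wall", "table", "shelf", "none of these"]
    print("New landmark near (2.1, 0.4). Which structure does it belong to?")
    for i, name in enumerate(options):
        print("  %d) %s" % (i, name))
    choice = int(input("choice: "))
    if options[choice] != "none of these":
        candidate["structure"] = options[choice]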

Who is going to do it?

Nicolau Leal Werneck is a graduate student working in the Intelligent Techniques Laboratory at the Escola Politécnica da Universidade de São Paulo (USP), in São Paulo, Brazil. Monocular visual SLAM with maps such as those used by GuideBeagle is the topic of his doctoral research.

More information

If you are interested in more details of the technology that will be used to build GuideBeagle, please follow the links regarding the Image Processing techniques to be applied and the fundamentals of Mobile Robotics.