The material of this webpage was published in 2010.
This webpage provides a simple C++ library that relies on OpenCV and can be used in real-time Augmented Reality projects. In short, it can track black-white markers and provides the exterior orientation (pose) between the tracked pattern and the camera. Unlike other well-known toolkits/libraries, it works frame-wise (no history is used), so tracking reduces to per-frame detection.
Note that the code is provided "as is" without warranty of any kind. Use and redistribution of the code are permitted for academic/research purposes only. Please contact the author if you want to use it for other purposes.
Augmented Reality systems try to augment a (live) video by appropriately adding 3D objects or annotations to the scene. The primary requirement of such systems is the solution of the exterior orientation problem, a.k.a. pose estimation or extrinsic calibration. In other words, the transformation from the object coordinate system to the camera coordinate system must be estimated. The solution of such problems relies on 3D/2D correspondences. For a complete augmentation, the intrinsic parameters must also be known, because any 3D virtual object must be further projected from the camera coordinate system to the image coordinate system (image plane). However, the intrinsic parameters are fixed and invariant to camera/scene motion, so they can be estimated offline.
Binary (black-white) patterns printed on planar surfaces are easier to track and have been widely used as markers in Augmented Reality applications. The typical steps of such systems are the following:
Offline
1) Find intrinsic parameters (i.e. camera matrix)
2) Load one or more patterns that you want to track.
Online
1) Detect the pattern's position (corners) in a captured frame
2) "Normalize" (warp) the detected pattern ROI and compare it with the loaded patterns
3) Once a pattern is identified, estimate the transformation between the camera coordinate system (CS) and the pattern CS. This estimation relies on 2D/3D correspondences.
4) Use the above transformation matrix for further rendering/augmentation
In my demo, I skip the camera calibration step and use default camera parameters as provided by OpenCV (camera parameters, distortion parameters). Strictly speaking, you should calibrate your camera before using it in an AR project; OpenCV provides functions to run the calibration. However, the default parameters are compatible with typical web cameras.
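For reference, OpenCV's cv::FileStorage reads/writes such parameters in a YAML layout like the one below. The field names follow OpenCV's calibration examples; the numbers are illustrative placeholders, not the parameters shipped with the demo:

```yaml
%YAML:1.0
camera_matrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   # fx, 0, cx / 0, fy, cy / 0, 0, 1 -- illustrative values only
   data: [ 500., 0., 320., 0., 500., 240., 0., 0., 1. ]
distortion_coefficients: !!opencv-matrix
   rows: 5
   cols: 1
   dt: d
   data: [ 0., 0., 0., 0., 0. ]  # k1, k2, p1, p2, k3
```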
While one can use their own patterns or those provided by other libraries (e.g. ARToolkit), I make my own patterns available since they are used by my sample.
Download from HERE the patterns in various formats. Print the PDF files to use them for tracking (keep a white area around the black one). The image files are loaded by the library as references for pattern identification. It is very easy to create your own patterns.
Download source code and sample (OpenCV 2.4.3)
You also need to download the camera and lens parameters provided above if you want to skip the calibration step. Make sure that these parameters are compatible with your camera; if not, you need to calibrate it. HERE is an OpenCV tutorial on camera calibration.
Also watch the following video, which shows how one can augment input frames by detecting/recognizing a single marker.
Multi-Pattern Tracking
More than one pattern is considered here. Three patterns are loaded, and each one is identified by its ID. The identification is visually distinguished by the color of the drawn cube:
ID 1: magenta
ID 2: cyan
ID 3: yellow
Watch the following video, which shows augmentation with multiple patterns.
Contact
For any bugs, questions or help, please contact the author.
e-mail: -delete-george.evangelidis@-delete-gmail.com