We propose a GPU-based online patch learning algorithm for detecting a planar target object. The proposed algorithm learns the appearance of a planar region by computing warped patches as seen from a set of viewpoints. To accelerate the learning process, we replace CPU patch warping with patch rendering on the mobile phone's GPU, and we apply radial and Gaussian blurs to each rendered patch to account for the appearance differences caused by small pose disturbances.
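The viewpoint simulation step above can be illustrated on the CPU. The sketch below warps a patch by a homography (the operation the paper offloads to the GPU as patch rendering) and then applies a separable Gaussian blur to approximate small pose disturbances; the radial blur would be handled analogously. This is a minimal illustration of the math, not the authors' GPU implementation, and the function names are our own.

```python
import numpy as np

def homography_warp(patch, H, out_shape):
    """Warp a grayscale patch by homography H using inverse mapping with
    nearest-neighbour sampling. On the phone this step is replaced by
    GPU patch rendering; this CPU version only illustrates the mapping."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1).astype(float)
    src = np.linalg.inv(H) @ pts          # destination -> source coordinates
    src /= src[2]                         # de-homogenize
    sx = np.clip(np.round(src[0]).astype(int), 0, patch.shape[1] - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, patch.shape[0] - 1)
    return patch[sy, sx].reshape(h, w)

def gaussian_blur(img, sigma):
    """Separable Gaussian blur approximating the appearance change
    caused by small pose disturbances around a sampled viewpoint."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1,
                              img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)
```

Rendering each warp as a textured quad on the GPU replaces the per-pixel loop implicit in `homography_warp`, which is what makes in-situ learning fast enough for a phone.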
Our learning algorithm automatically generates a frontal view of a patch from an initial camera pose obtained with the mobile phone's built-in accelerometer; it therefore does not require a frontal view of the patch as input, which is a common requirement of existing patch learning algorithms. In addition, our algorithm needs no prior knowledge: a single image of the planar region suffices for learning. In our experiments, learning a patch takes only a few seconds on a mobile phone. We expect our algorithm to be useful for mobile Augmented Reality (AR) tagging applications in the real world.
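The accelerometer-based rectification can be sketched as follows. Assuming a horizontal target plane, the measured gravity direction gives the camera tilt, and a purely rotational homography H = K R K⁻¹ re-renders the view as if the camera were fronto-parallel. The helper names and axis conventions below are our assumptions (conventions differ between phones); this is an illustration of the idea, not the paper's code.

```python
import numpy as np

def rotation_from_gravity(g):
    """Rotation aligning the measured gravity direction g (device frame)
    with the assumed plane normal [0, 0, 1]. Rodrigues' formula for the
    rotation taking one unit vector onto another."""
    g = np.asarray(g, dtype=float)
    g = g / np.linalg.norm(g)
    n = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, n)                    # rotation axis (unnormalized)
    c = g @ n                             # cos of the rotation angle
    s = np.linalg.norm(v)                 # sin of the rotation angle
    if s < 1e-9:                          # already aligned (or opposite)
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)

def frontal_homography(K, g):
    """Homography that maps the current view of a horizontal plane to a
    frontal view, given camera intrinsics K and accelerometer reading g."""
    R = rotation_from_gravity(g)
    return K @ R @ np.linalg.inv(K)
```

When the phone already faces the plane head-on (gravity along the plane normal), the rotation is the identity and the homography reduces to the identity, as expected.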
W. Lee, Y. Park, V. Lepetit, W. Woo, "In-Situ Video Tagging on Mobile Phones," IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 10, Oct. 2011.
W. Lee, Y. Park, V. Lepetit, W. Woo, "Point-and-Shoot for Ubiquitous Tagging on Mobile Phones," IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2010.