A spherical panoramic image, taken by a spherical camera such as the Point Grey Ladybug or the RICOH THETA, is now widely used as a more efficient image format than the standard perspective image, since it captures the full 360° visual appearance around the camera in a single shot. However, the appearance of objects in such an image is heavily distorted by the equirectangular projection. This severely degrades feature matching, because popular local feature detectors and descriptors are designed to be invariant only up to affine transformations.
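For reference, the equirectangular projection maps a viewing direction with longitude θ ∈ [−π, π] and latitude φ ∈ [−π/2, π/2] to pixel coordinates in a W×H panorama as u = W(θ + π)/(2π) and v = H(π/2 − φ)/π (the standard convention; the symbols W, H, θ, φ are ours, not notation from the paper above). Because every latitude is stretched over a full image row, horizontal scale grows as 1/cos φ, so regions near the poles are deformed far beyond what an affine model can absorb.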
We propose a simple yet effective method for improving the performance of local feature matching between spherical panoramic images in equirectangular projection. The key idea is to explicitly generate synthesized images by rotating the spherical panoramic image. The keypoint detector and feature descriptor are then applied only to the less distorted areas of the synthesized panoramas. Our framework clearly improves feature matching performance between spherical panoramic images, and it can be combined with any recent feature detector and descriptor.
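As a rough sketch of this idea (not the authors' released code), the snippet below uses NumPy and OpenCV to synthesize pitched copies of an equirectangular panorama and to run SIFT only on the equatorial band, where the projection distortion is smallest. The function names, the 0°/±90° rotation set, and the band width are our own illustrative choices.

```python
import cv2
import numpy as np

def rotate_equirectangular(pano, pitch_deg):
    """Synthesize the equirectangular panorama seen after pitching the
    camera by pitch_deg degrees (inverse warp with bilinear sampling)."""
    h, w = pano.shape[:2]
    # Output pixel grid -> spherical angles (longitude theta, latitude phi).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    theta = (u + 0.5) / w * 2.0 * np.pi - np.pi       # [-pi, pi)
    phi = np.pi / 2.0 - (v + 0.5) / h * np.pi         # (pi/2, -pi/2)
    # Angles -> unit viewing rays (x right, y up, z forward).
    x = np.cos(phi) * np.sin(theta)
    y = np.sin(phi)
    z = np.cos(phi) * np.cos(theta)
    # Rotate the rays about the horizontal x-axis back into the source frame.
    a = np.deg2rad(pitch_deg)
    y_r = np.cos(a) * y - np.sin(a) * z
    z_r = np.sin(a) * y + np.cos(a) * z
    # Rotated rays -> angles -> source pixel coordinates, wrapped in longitude.
    theta_r = np.arctan2(x, z_r)
    phi_r = np.arcsin(np.clip(y_r, -1.0, 1.0))
    map_x = (((theta_r + np.pi) / (2.0 * np.pi) * w - 0.5) % w).astype(np.float32)
    map_y = ((np.pi / 2.0 - phi_r) / np.pi * h - 0.5).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)

def features_on_equatorial_band(pano, band_ratio=0.5):
    """Run SIFT only inside the low-distortion band around the equator."""
    h = pano.shape[0]
    mask = np.zeros(pano.shape[:2], np.uint8)
    margin = int(h * (1.0 - band_ratio) / 2.0)
    mask[margin:h - margin, :] = 255
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(pano, mask)

# Usage: pool features from the upright panorama and two pitched copies,
# so the poles of the original image also fall inside a low-distortion band.
pano = cv2.imread("pano1.jpg")                        # hypothetical input file
all_desc = []
for pitch in (0, 90, -90):
    kp, desc = features_on_equatorial_band(rotate_equirectangular(pano, pitch))
    all_desc.append(desc)
# Keypoint coordinates would be mapped back to the original panorama before
# matching and visualization; that bookkeeping is omitted here for brevity.
```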
Experimental results of feature matching between spherical panoramic images. Left (top): evaluation under pure camera rotation. The graph plots matching precision (y-axis) against rotation angle (x-axis). Right (bottom): evaluation under random camera translation. The boxplot summarizes the distribution of matching precision (y-axis) for each method (x-axis). The proposed method outperforms the state-of-the-art method [3], which is also designed for spherical cameras.
Feature matching between two panoramic images related by a camera rotation, obtained by standard SIFT (left) and by SIFT with the proposed method (right). In both cases, correct and incorrect matches with respect to the ground-truth correspondences are shown as green and red dots, respectively. The proposed method yields 1226 correct matches, more than twice as many as standard SIFT (535 matches).
Hajime Taira, Yuki Inoue, Akihiko Torii and Masatoshi Okutomi, "Robust Feature Matching for Distorted Projection by Spherical Cameras," IPSJ Transactions on Computer Vision and Applications, Vol. 7, pp. 84-88, July 2015. [PDF]
[1] Lowe, David G. "Object recognition from local scale-invariant features." Proceedings of the Seventh IEEE International Conference on Computer Vision, Vol. 2, IEEE, 1999.
[2] Mikolajczyk, Krystian, and Cordelia Schmid. "Scale & affine invariant interest point detectors." International Journal of Computer Vision 60.1 (2004): 63-86.
[3] Cruz-Mota, Javier, et al. "Scale invariant feature transform on the sphere: Theory and applications." International Journal of Computer Vision 98.2 (2012): 217-241.