The 1D trifocal tensor is the geometric constraint relating three views of 1D data. Such data arises naturally from images: for example, using only 1D bearing information is very convenient when working with omnidirectional images, since it avoids unwarping them and exploits the fact that the angle of a feature in these images is measured more accurately than its distance to the image projection center. We show how to estimate the 1D radial trifocal tensor and apply it to robust feature matching and robot localization.
In addition, we propose estimating the 1D trifocal tensor taking advantage of dominant planes in the scene: the homography corresponding to a dominant plane can be obtained and its constraints included in the 1D tensor estimation, yielding a more efficient estimation process and a more accurate reconstruction of the plane information.
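The basic linear estimation of the 1D trifocal tensor can be sketched as follows. The tensor is a 2×2×2 array T, defined up to scale, and each triple of corresponding 1D points (bearings represented as 2-vector homogeneous coordinates) gives one trilinear equation, so at least seven correspondences are needed. This is an illustrative DLT-style sketch, not the authors' exact implementation (which additionally handles the plane constraints mentioned above):

```python
import numpy as np

def estimate_1d_trifocal(u, v, w):
    """Linear estimate of the 2x2x2 1D trifocal tensor.

    u, v, w: (n, 2) arrays of homogeneous 1D points in the three views,
    with n >= 7. Each correspondence gives one trilinear equation
        sum_{i,j,k} T[i,j,k] * u_i * v_j * w_k = 0,
    so T (8 entries, up to scale) is the null vector of an n x 8
    design matrix, taken from the SVD.
    """
    n = u.shape[0]
    A = np.zeros((n, 8))
    for r in range(n):
        # row = flattened outer product u ⊗ v ⊗ w of the r-th triple
        A[r] = np.einsum('i,j,k->ijk', u[r], v[r], w[r]).ravel()
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(2, 2, 2)
```

In practice this linear solution would be used inside a robust (e.g. RANSAC) loop over the bearing correspondences, since real matches contain outliers.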
Researchers: A.C. Murillo, J.J. Guerrero, C. Sagüés.
Project: MCYT/FEDER - DPI2003 07986
Related Publications: Localization with Omnidirectional Images using the Radial Trifocal Tensor.
Robot and Landmark Localization using Scene Planes and the 1D Trifocal Tensor.
Localization and Matching using the Planar Trifocal Tensor with Bearing-only Data.
We evaluate different methods to estimate the Fundamental Matrix (the two-view geometric constraint) from two homographies computed from automatic line correspondences. Each homography corresponds to one of the dominant planes in the image, which allows us to segment the lines belonging to each plane.
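One classical way to obtain the Fundamental Matrix from two plane-induced homographies uses the fact that H1·H2⁻¹ is a planar homology whose vertex is the epipole in the second image; once the epipole e' is known, F = [e']ₓ·H1 up to scale. A minimal sketch of this route (our illustration of the standard construction, not necessarily the exact method evaluated here):

```python
import numpy as np

def skew(e):
    """Cross-product matrix [e]_x such that skew(e) @ x = cross(e, x)."""
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

def fundamental_from_homographies(H1, H2):
    """Fundamental matrix from two plane-induced homographies.

    H = H1 @ inv(H2) is a planar homology: two of its eigenvalues are
    equal, and the eigenvector of the remaining (distinct) eigenvalue
    is the epipole e' in the second image. Then F = [e']_x H1, which
    equals [e']_x H2 up to scale.
    """
    H = H1 @ np.linalg.inv(H2)
    w, V = np.linalg.eig(H)
    # pick the eigenvalue farthest from the other two (the non-repeated one)
    d = [np.sum(np.abs(w - w[i])) for i in range(3)]
    e2 = np.real(V[:, int(np.argmax(d))])
    return skew(e2) @ H1, e2
```

The recovered F is defined up to scale; degenerate configurations (planar scene, pure rotation) make the homology ill-conditioned, which is exactly what the filter described below must detect.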
We develop a filter to detect whether the robust estimation process has succeeded or failed (either because the automatic homography estimation failed, or because the scene in the image is planar, or because there is only rotation between the two views).
If the two homographies are correct, the intersection of the two planes they represent must coincide with the intersection of the main planes in the image. We verify that the filter properly rejects the cases where the automatic estimation of the homographies and their corresponding planes has failed.
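A consistency check of this kind can be sketched as follows (our hypothetical illustration, not the paper's exact filter). If both homographies are induced by real scene planes under the same two-view geometry, there is a scalar mu (a repeated generalized eigenvalue of the pair) making H1 − mu·H2 a rank-1 matrix e'·aᵀ, whose row direction a is the image of the planes' intersection line in the first view; a non-negligible second singular value of that matrix flags a failed estimation:

```python
import numpy as np

def intersection_line_and_check(H1, H2, ratio_tol=1e-3):
    """Intersection line of the two planes (first image) + sanity check.

    For consistent plane-induced homographies, H1 - mu*H2 is rank 1 for
    the repeated generalized eigenvalue mu, and its dominant right
    singular vector is the intersection line. The singular-value ratio
    s1/s0 measures how far the matrix is from rank 1.
    """
    w = np.linalg.eigvals(np.linalg.inv(H2) @ H1)
    # the repeated generalized eigenvalue: average of the two closest
    pairs = [(0, 1), (0, 2), (1, 2)]
    i, j = min(pairs, key=lambda p: abs(w[p[0]] - w[p[1]]))
    mu = np.real(w[i] + w[j]) / 2.0
    M = H1 - mu * H2
    _, s, Vt = np.linalg.svd(M)
    line = Vt[0]                      # intersection line, unit norm
    ok = s[1] / s[0] < ratio_tol      # near rank 1 -> consistent pair
    return line, ok
```

The threshold `ratio_tol` is an assumed tuning parameter; in a real system it would be calibrated on the noise level of the homography estimates.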
Robust line matches in the two dominant planes of the scene, after the robust computation of two homographies.
Left: Intersection lines obtained from 100 runs of the automatic estimation of the two dominant planes in the scene.
Right: The same runs, but using the filter to reject the wrong estimations.
Researchers: A.C. Murillo, J.J. Guerrero, C. Sagüés.
Project: MCYT/FEDER - DPI2003 07986
Related Publications: From lines to epipoles through planes in two views.