Research

Joint Estimation of Camera Orientation and Vanishing Points from an Image Sequence in a Non-Manhattan World


Abstract

A widely used approach for estimating camera orientation is to use points at infinity, i.e., vanishing points (VPs). Enforcing the orthogonality constraint between the VPs, known as the Manhattan world constraint, enables drift-free estimation of the camera orientation. However, in practical applications, this approach is neither effective (because of noisy parallel line segments) nor applicable to non-Manhattan world scenes. To overcome these limitations, we propose a novel method that jointly estimates the VPs and camera orientation based on sequential Bayesian filtering. The proposed method does not require the Manhattan world assumption and achieves highly accurate estimation of camera orientation. To enhance the robustness of the joint estimation, we propose a keyframe-based feature management technique that removes false positives from parallel line clusters and detects new parallel line sets using geometric properties such as the orthogonality and rotational dependence among a VP, a line, and the camera rotation. In addition, we propose a 3-line camera rotation estimation method that does not require the Manhattan world assumption. The 3-line method is applied within a RANSAC-based outlier rejection scheme to eliminate outlier measurements; therefore, the proposed method achieves accurate and robust estimation of the camera orientation and VPs in general scenes with non-orthogonal parallel lines. We demonstrate the superiority of the proposed method through an extensive evaluation on synthetic and real datasets and through comparison with other state-of-the-art methods.
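The geometric relations underpinning the feature management and the RANSAC-based outlier rejection are the rotational dependence of a VP (a VP is the image of a 3D line direction, so it depends only on the camera rotation) and the incidence between an image line and its VP. The sketch below illustrates both under a simple pinhole model; the intrinsics, rotation, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def vp_from_rotation(K, R, d):
    """Vanishing point of world line direction d under camera rotation R:
    v ~ K R d (homogeneous). The VP depends only on R, not on translation."""
    v = K @ R @ d
    return v / v[2]

def line_vp_residual(l, v):
    """A homogeneous image line l passes through a VP v iff l . v = 0;
    scaling l so that ||l[:2]|| = 1 makes |l . v| a distance in pixels."""
    l = l / np.linalg.norm(l[:2])
    return abs(l @ v)

# Illustrative pinhole intrinsics and a 10-degree yaw rotation.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
yaw = np.deg2rad(10.0)
R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
              [0.0, 1.0, 0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
d = np.array([1.0, 0.0, 0.0])   # direction shared by a set of parallel 3D lines

v = vp_from_rotation(K, R, d)
p = np.array([100.0, 50.0, 1.0])   # any image point; the line through p and v
l = np.cross(p, v)                 # is consistent with the VP by construction
print(line_vp_residual(l, v))      # ~0 -> inlier of this parallel-line cluster
```

In a RANSAC loop, residuals of this kind would separate line segments that truly belong to a parallel-line cluster from false positives.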


Demo Video


Source Code


Publication

  • Jeong-Kyun Lee and Kuk-Jin Yoon, "Joint Estimation of Camera Orientation and Vanishing Points from an Image Sequence in a Non-Manhattan World", International Journal of Computer Vision (IJCV), vol. 127, no. 10, pp. 1426-1442, 2019.
  • Jeong-Kyun Lee and Kuk-Jin Yoon, "Real-time Joint Estimation of Camera Orientation and Vanishing Points", IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), Boston, 2015.

Temporally Consistent Road Surface Profile Estimation Using Stereo Vision


Abstract

Road surface profile (RSP) estimation is an important task for finding imperfections in a road surface and thereby improving ride quality. RSP estimation has recently been studied using stereo vision owing to its affordable cost. However, the existing methods produce noisy and temporally unstable results in real-world driving scenes because of noisy range measurements, a noisy estimate of the pitch angle between the camera and the road surface, and interference from obstacles. This paper proposes a novel method for temporally consistent and robust RSP estimation that overcomes these problems. The proposed method consists of three steps: free space estimation, digital elevation map (DEM) estimation, and RSP estimation. We first estimate the drivable area, i.e., the free space. For robust and fast free space estimation, we propose an optimization-based non-parametric road surface modeling method and an integral disparity histogram-based free space estimation method. Then, the DEM of the road surface is estimated using only range measurements within the free space to avoid obstacle interference. The DEM is updated every frame using a moving average filter and a DEM reference grid update scheme. Owing to these strategies, the proposed method reduces the elevation estimation noise and the pitch angle error, and therefore provides a temporally consistent RSP. We experimentally demonstrate the superiority of the proposed method using stereo image sequences captured in real-world driving scenes.
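As a rough illustration of the temporal DEM update, the sketch below fuses per-frame elevation measurements into a grid cell by cell; an exponential moving average stands in for the paper's moving average filter, and the grid shape, `alpha`, and class interface are assumptions.

```python
import numpy as np

class TemporalDEM:
    """Per-cell temporal smoothing of a digital elevation map (DEM).
    An exponential moving average stands in for the moving average filter;
    the grid shape and alpha are illustrative."""

    def __init__(self, shape, alpha=0.3):
        self.elev = np.zeros(shape)                # running elevation estimate
        self.seen = np.zeros(shape, dtype=bool)    # cells observed so far
        self.alpha = alpha                         # weight of the new frame

    def update(self, cells, elevations):
        """Fuse this frame's elevation measurements (one per DEM cell)."""
        for (r, c), z in zip(cells, elevations):
            if self.seen[r, c]:
                self.elev[r, c] = (1.0 - self.alpha) * self.elev[r, c] + self.alpha * z
            else:                                  # first observation of the cell
                self.elev[r, c] = z
                self.seen[r, c] = True

dem = TemporalDEM((100, 60))
dem.update([(10, 5), (10, 6)], [0.02, -0.01])      # elevations in meters
dem.update([(10, 5)], [0.04])
print(dem.elev[10, 5])                             # 0.7*0.02 + 0.3*0.04 = 0.026
```

Restricting the measurements to cells inside the estimated free space is what keeps obstacles from contaminating the elevation estimates.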


Demo Video


Dataset


Publication

  • Jeong-Kyun Lee and Kuk-Jin Yoon, "Temporally Consistent Road Surface Profile Estimation Using Stereo Vision", IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 5, pp. 1618-1628, May 2018.

Joint Layout Estimation and Global Multi-View Registration for Indoor Reconstruction


Abstract

In this paper, we propose a novel method that jointly solves the scene layout estimation and global registration problems for accurate indoor 3D reconstruction. Given a sequence of range data, we first build a set of scene fragments using KinectFusion and register them through pose graph optimization. Afterwards, we alternate between layout estimation and layout-based global registration in an iterative fashion so that the two processes complement each other. We extract the scene layout through hierarchical agglomerative clustering and energy-based multi-model fitting that account for noisy measurements. With the estimated scene layout in hand, we register all the range data through a global iterative closest point algorithm in which 3D points belonging to layout structures, such as walls and the ceiling, are constrained to lie close to the layout. We verify the proposed method quantitatively and qualitatively on publicly available synthetic and real-world datasets.
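The sketch below shows one plausible form of such a layout constraint: a soft point-to-plane penalty over 3D points assigned to layout planes, added to the global ICP objective. The function names, weighting, and plane parameterization (unit normal n and offset d with n·x + d = 0) are assumptions, not the paper's code.

```python
import numpy as np

def layout_cost(points, plane_normals, plane_offsets, assign, weight=1.0):
    """Soft layout constraint for the global ICP objective: sum of squared
    point-to-plane distances for 3D points assigned to layout planes,
    with planes parameterized as n . x + d = 0 (n a unit normal)."""
    n = plane_normals[assign]                      # (N, 3) normal per point
    d = plane_offsets[assign]                      # (N,)  offset per point
    dist = np.einsum('ij,ij->i', points, n) + d    # signed point-to-plane distance
    return weight * np.sum(dist ** 2)

# One horizontal "floor" plane z = 0 and two nearly coplanar points.
normals = np.array([[0.0, 0.0, 1.0]])
offsets = np.array([0.0])
points = np.array([[1.0, 2.0, 0.05],
                   [3.0, 1.0, -0.02]])
print(layout_cost(points, normals, offsets, np.array([0, 0])))  # 0.0029
```

Minimizing this term alongside the usual point-to-point ICP residuals pulls wall and ceiling points onto their planes, which is why the reconstruction preserves the large planar structures of the scene.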


Experimental Results

  • Comparison of the proposed method (right) with the state-of-the-art method [S. Choi et al., CVPR, 2015] (left). With the aid of the scene layout, the proposed method preserves important structures of the scene, such as walls and the floor.

Publication

  • Jeong-Kyun Lee, Jae-Won Yea, Min-Gyu Park, and Kuk-Jin Yoon, "Joint Layout Estimation and Global Multi-View Registration for Indoor Reconstruction", IEEE International Conference on Computer Vision (ICCV), Venice, 2017.

Three-Point Direct Stereo Visual Odometry


Abstract

Stereo visual odometry estimates the ego-motion of a stereo camera from an image sequence. Previous methods generally estimate the ego-motion from a set of inlier features while filtering out outliers. However, since a perfect classification of inlier and outlier features is practically impossible, the motion estimate is often contaminated by erroneous inliers. In this paper, we propose a novel three-point direct method for stereo visual odometry that is more accurate and more robust to outliers. To improve both accuracy and robustness, we rely on two key ideas: sampling a minimal number of features, i.e., three points, and minimizing photometric errors, in order to maximally reduce measurement errors. In addition, we exploit the temporal information of features, i.e., feature tracks. Local features are updated using the feature tracks, and the updated feature points improve the accuracy of the proposed pose estimation. We compare the proposed method with other state-of-the-art methods and demonstrate its superiority through experiments on the KITTI benchmark.
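As a minimal illustration of the direct part of the method, the sketch below scores a pose hypothesis (such as one computed from a three-point sample) by photometric residuals: reference 3D points are projected into the current image and their intensities compared via bilinear sampling. The pinhole model and all names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolate image intensity at subpixel (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    p = img[y0:y0 + 2, x0:x0 + 2]
    return (p[0, 0] * (1 - dx) * (1 - dy) + p[0, 1] * dx * (1 - dy)
            + p[1, 0] * (1 - dx) * dy + p[1, 1] * dx * dy)

def photometric_residuals(pts3d, intens_ref, img_cur, K, R, t):
    """Project reference 3D points into the current image with pose (R, t)
    and return the intensity differences; a pose hypothesis from a
    three-point sample can be scored by these residuals."""
    res = []
    for X, i_ref in zip(pts3d, intens_ref):
        Xc = R @ X + t                  # transform into the current camera frame
        u = K @ (Xc / Xc[2])            # pinhole projection (homogeneous)
        res.append(bilinear(img_cur, u[0], u[1]) - i_ref)
    return np.array(res)

# Tiny synthetic example: with the identity pose, residuals are ~zero.
img = np.arange(100, dtype=float).reshape(10, 10)
K = np.array([[2.0, 0.0, 5.0], [0.0, 2.0, 5.0], [0.0, 0.0, 1.0]])
pts3d = np.array([[0.5, -0.5, 2.0], [0.0, 0.5, 4.0]])
intens_ref = [bilinear(img, *(K @ (X / X[2]))[:2]) for X in pts3d]
print(photometric_residuals(pts3d, intens_ref, img, K, np.eye(3), np.zeros(3)))
```

Scoring minimal three-point hypotheses this way, rather than minimizing geometric reprojection error over a large inlier set, is what limits the influence of erroneous inliers on the final motion estimate.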


Demo Video


Publication

  • Jeong-Kyun Lee and Kuk-Jin Yoon, "Three-Point Direct Stereo Visual Odometry", British Machine Vision Conference (BMVC), York, 2016.