MOOV3D Project



Introduction

The objective of MOOV3D is to build a prototype mobile platform equipped with a stereoscopic camera and several different types of 3D display output: an auto-stereoscopic screen on the mobile device, 3D glasses, or a 3D high-definition television connected to the mobile device.
    In collaboration with the other industrial partners, Augmented Reality and 3D navigation applications are being developed: using the 3D output of the platform, image analysis techniques are used to precisely locate objects in the surrounding environment and to insert and display synthetic objects in relief in this scene. The 3D image quality will be studied in relation to the modelling of human visual perception of depth. The reactions of users will be evaluated during the project and will guide its direction.

    Partners of the project:

    Real-Time Stereo Tracker for Augmented Reality applications

    The aim is to develop a real-time stereo tracker that enables augmented reality applications on the mobile prototype, possibly with stereo output, so that the result can be viewed on an auto-stereoscopic screen or with 3D glasses.

    The tracker is based on the stereo images provided by the two cameras on board the mobile platform. It reconstructs 3D points from matched points in the stereo pairs and then tracks them over the following frames. At each frame, the 2D-3D correspondences are used to estimate the position of the camera.
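
    To make the pipeline concrete, here is a minimal sketch of the reconstruction and pose-estimation steps using the OpenCV C++ API. The projection matrices P1 and P2 and the matched keypoints are assumed to come from a prior stereo calibration and matching step; this is an illustration, not the project's actual code.

        #include <opencv2/opencv.hpp>
        #include <vector>

        // Triangulate matched stereo points into 3D.
        // P1, P2: 3x4 projection matrices (CV_32F) from stereo calibration;
        // ptsL, ptsR: matched keypoints in the left and right images.
        std::vector<cv::Point3f> triangulate(const cv::Mat& P1, const cv::Mat& P2,
                                             const std::vector<cv::Point2f>& ptsL,
                                             const std::vector<cv::Point2f>& ptsR)
        {
            cv::Mat pts4D;  // 4xN homogeneous coordinates
            cv::triangulatePoints(P1, P2, ptsL, ptsR, pts4D);

            std::vector<cv::Point3f> pts3D;
            for (int i = 0; i < pts4D.cols; ++i) {
                cv::Mat x = pts4D.col(i);
                x /= x.at<float>(3);  // de-homogenize
                pts3D.push_back(cv::Point3f(x.at<float>(0), x.at<float>(1), x.at<float>(2)));
            }
            return pts3D;
        }

        // Estimate the camera pose at a new frame from 2D-3D correspondences
        // (needs at least 4 correspondences; K is the camera matrix).
        void estimatePose(const std::vector<cv::Point3f>& pts3D,
                          const std::vector<cv::Point2f>& pts2D,
                          const cv::Mat& K, cv::Mat& rvec, cv::Mat& tvec)
        {
            cv::solvePnPRansac(pts3D, pts2D, K, cv::Mat(), rvec, tvec);
        }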

    In the current implementation, a dominant plane fitting most of the 3D points is found, and the 3D model (a simple skeleton of a cube) is drawn on it. Of course, the long-term aim is to use more realistic rendering engines (e.g. OpenGL, Unity3D, etc.) to render the 3D content.
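
    OpenCV does not ship a ready-made plane-fitting routine, so a simple RANSAC loop over the reconstructed points is one way to find the dominant plane. A sketch (the iteration count and inlier threshold are made-up values):

        #include <opencv2/opencv.hpp>
        #include <cmath>
        #include <cstdlib>
        #include <vector>

        // RANSAC dominant-plane fitting: sample 3 random points, build the
        // candidate plane n.x + d = 0, count inliers, keep the best model.
        bool fitDominantPlane(const std::vector<cv::Point3f>& pts,
                              cv::Point3f& n, float& d,
                              int iters = 500, float tol = 0.01f)
        {
            if (pts.size() < 3) return false;

            size_t best = 0;
            for (int it = 0; it < iters; ++it) {
                const cv::Point3f& a = pts[std::rand() % pts.size()];
                const cv::Point3f& b = pts[std::rand() % pts.size()];
                const cv::Point3f& c = pts[std::rand() % pts.size()];
                cv::Point3f cand = (b - a).cross(c - a);  // candidate normal
                float norm = std::sqrt(cand.dot(cand));
                if (norm < 1e-6f) continue;               // degenerate sample
                cand *= 1.0f / norm;
                float candD = -cand.dot(a);

                size_t inliers = 0;
                for (size_t i = 0; i < pts.size(); ++i)
                    if (std::fabs(cand.dot(pts[i]) + candD) < tol) ++inliers;

                if (inliers > best) { best = inliers; n = cand; d = candD; }
            }
            return best > pts.size() / 2;  // the plane must fit most of the points
        }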

    The tracker is implemented on Android 2.3, using OpenCV and JNI.
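
    For the curious, the Java side typically reaches the native tracker through a thin JNI layer along these lines (the class and method names here are hypothetical, made up for illustration):

        #include <jni.h>
        #include <opencv2/opencv.hpp>

        // Hypothetical JNI entry point: the Java side passes the addresses of
        // two cv::Mat objects (as done by OpenCV4Android) and gets back the
        // 6-DoF pose (rvec + tvec) in a float array.
        extern "C"
        JNIEXPORT void JNICALL
        Java_com_example_moov3d_StereoTracker_nativeTrack(
                JNIEnv* env, jobject /*thiz*/,
                jlong leftAddr, jlong rightAddr, jfloatArray poseOut)
        {
            cv::Mat& left  = *reinterpret_cast<cv::Mat*>(leftAddr);
            cv::Mat& right = *reinterpret_cast<cv::Mat*>(rightAddr);

            float pose[6] = {0};  // rvec + tvec
            // ... run the stereo tracker on (left, right) and fill pose ...

            env->SetFloatArrayRegion(poseOut, 0, 6, pose);
        }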

    Here are some work-in-progress results:


    Stereo Tracker using a chessboard

    In this (debugging) version of the tracker, the plane fitting is replaced by the detection of a chessboard, in order to have some reliable points lying on a plane. The chessboard detection is provided by OpenCV, as it is commonly used for calibration purposes.
    Once the chessboard is detected in both images, the corner points are reconstructed and the 3D plane containing them is found.
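
    A sketch of this step with the OpenCV calls involved (triangulate() is the helper sketched earlier; grayscale input images and a known board size are assumed):

        #include <opencv2/opencv.hpp>
        #include <vector>

        // Detect the same chessboard in both (grayscale) views and
        // triangulate its corners to get reliable coplanar 3D points.
        bool reconstructChessboard(const cv::Mat& left, const cv::Mat& right,
                                   const cv::Mat& P1, const cv::Mat& P2,
                                   cv::Size board,  // inner corners, e.g. 9x6
                                   std::vector<cv::Point3f>& corners3D)
        {
            std::vector<cv::Point2f> cL, cR;
            if (!cv::findChessboardCorners(left, board, cL) ||
                !cv::findChessboardCorners(right, board, cR))
                return false;  // the board must be visible in both views

            // Refine the corners to sub-pixel accuracy before triangulating.
            cv::TermCriteria crit(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01);
            cv::cornerSubPix(left,  cL, cv::Size(11, 11), cv::Size(-1, -1), crit);
            cv::cornerSubPix(right, cR, cv::Size(11, 11), cv::Size(-1, -1), crit);

            corners3D = triangulate(P1, P2, cL, cR);  // coplanar by construction
            return true;
        }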

    The (skeleton of a) cube is then drawn on both images as if it were on the plane. The initial position of the cube is (arbitrarily) given by the intersection point of the plane and the optical axis of the camera.
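
    With the plane written as n.x + d = 0, that intersection point is a one-liner, since the optical axis is the ray t*(0,0,1) from the camera centre:

        #include <opencv2/opencv.hpp>

        // Intersect the optical axis with the plane n.x + d = 0.
        // Substituting x = t*(0,0,1) gives n.z*t + d = 0, i.e. t = -d / n.z.
        cv::Point3f axisPlaneIntersection(const cv::Point3f& n, float d)
        {
            float t = -d / n.z;  // assumes the plane is not parallel to the axis
            return cv::Point3f(0.0f, 0.0f, t);
        }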

    The corner points are then tracked in the following frames using the Lucas-Kanade tracker (implemented in OpenCV), and the new position of the mobile device is estimated using the 2D-3D correspondences.
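
    The per-frame update then looks roughly like this (a sketch; points that the tracker loses are simply dropped):

        #include <opencv2/opencv.hpp>
        #include <vector>

        // Track the corners with pyramidal Lucas-Kanade, drop the lost ones,
        // and re-estimate the pose from the surviving 2D-3D correspondences.
        void trackAndUpdatePose(const cv::Mat& prev, const cv::Mat& curr,
                                std::vector<cv::Point2f>& pts2D,
                                std::vector<cv::Point3f>& pts3D,
                                const cv::Mat& K, cv::Mat& rvec, cv::Mat& tvec)
        {
            std::vector<cv::Point2f> next;
            std::vector<uchar> status;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(prev, curr, pts2D, next, status, err);

            // Keep only the points that were tracked successfully.
            std::vector<cv::Point2f> kept2D;
            std::vector<cv::Point3f> kept3D;
            for (size_t i = 0; i < status.size(); ++i)
                if (status[i]) { kept2D.push_back(next[i]); kept3D.push_back(pts3D[i]); }
            pts2D = kept2D;
            pts3D = kept3D;

            cv::solvePnPRansac(pts3D, pts2D, K, cv::Mat(), rvec, tvec);
        }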

    This video was made for debugging purposes using video available from the CAVA project. In this case, the code was running on a PC, but it can easily be ported to Android through JNI (see the next videos).



    Stereo Tracking using natural features

     

    In this (still debugging) version of the tracker, the chessboard detection has been removed. The plane on which we display the artifact is detected by determining the dominant plane among the 3D points reconstructed from the stereo pair.

    In this easy example, the points are detected mostly on the whiteboard, and the plane is found accordingly.

    The video is taken with the mobile platform but, again, processed off-line on a PC (taking a video of the screen of the mobile phone is a real hassle! :-)



    Stereo Tracker using natural features 2

     
     
    Another video showing the tracking result using another planar surface.

    The tracking already seems quite robust, but (as expected) there is some drift in the position estimate that makes the cube move on the plane.

    There is still a lot of work to do :-)


    Stereo Tracker on the platform

     

     
    This video shows how the tracker works on the prototype. Due to the limited computational resources available, the frame rate is much lower than on the PC, but the processing is still smooth.

    The beauty is that the same code runs unchanged both on a PC and on a mobile device powered by Android and an ARM processor.

    I hope nobody gets seasick watching the video :-)

     

    With natural features

     
    (30/11/2012) This video shows one of the latest developments on the platform. The code has finally been optimized in order to speed up the execution (no GPU optimization, though!). Now we can reach more than 15 fps and maintain a fairly stable tracking of the camera position.

    This is still a work-in-progress result, as the bundle adjustment has not been put in the loop yet. But the results already seem promising!
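
    For reference, the missing bundle-adjustment step would jointly refine all camera poses and 3D points by minimizing the total reprojection error. A standard Ceres-style residual for a single observation could look like this (an illustration of the idea, not code from the project; fixed intrinsics fx, fy, cx, cy are assumed):

        #include <ceres/ceres.h>
        #include <ceres/rotation.h>

        // Reprojection error for one observation: the camera pose is a
        // 6-vector (angle-axis rotation + translation), the point a 3-vector.
        struct ReprojError {
            ReprojError(double u, double v,
                        double fx, double fy, double cx, double cy)
                : u(u), v(v), fx(fx), fy(fy), cx(cx), cy(cy) {}

            template <typename T>
            bool operator()(const T* pose, const T* point, T* residual) const {
                T p[3];
                ceres::AngleAxisRotatePoint(pose, point, p);        // rotate...
                p[0] += pose[3]; p[1] += pose[4]; p[2] += pose[5];  // ...translate
                residual[0] = fx * p[0] / p[2] + cx - u;  // projected - observed
                residual[1] = fy * p[1] / p[2] + cy - v;
                return true;
            }

            double u, v, fx, fy, cx, cy;
        };

        // Each observation contributes one residual block:
        //   problem.AddResidualBlock(
        //       new ceres::AutoDiffCostFunction<ReprojError, 2, 6, 3>(
        //           new ReprojError(u, v, fx, fy, cx, cy)),
        //       NULL, pose, point);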


     

    Stress test

     
    (30/11/2012) This is just a STRESS TEST of the current version (the same as in the last video), where the tracker proves to be quite robust even to large and quick movements (up to some limit, of course).


     

    OpenGL rendering with Wavefront OBJs

     
    (13/03/2013) Just testing the rendering of OBJ files (Wavefront format) with OpenGL, using the tracking information provided by the library.
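
    Feeding the tracker's pose to OpenGL mostly amounts to converting the OpenCV (rvec, tvec) pair into a column-major modelview matrix, flipping the Y and Z axes on the way, since OpenCV's camera looks down +Z with Y pointing down while OpenGL looks down -Z with Y pointing up. A sketch (assuming double-precision rvec and tvec, as returned by solvePnP):

        #include <opencv2/opencv.hpp>

        // Build a column-major OpenGL modelview matrix from an OpenCV pose.
        // Rows 1 and 2 of [R|t] are negated to account for the axis flip.
        void poseToModelview(const cv::Mat& rvec, const cv::Mat& tvec, float mv[16])
        {
            cv::Mat R;
            cv::Rodrigues(rvec, R);  // 3x3 rotation from axis-angle

            for (int r = 0; r < 3; ++r)
                for (int c = 0; c < 3; ++c) {
                    double v = R.at<double>(r, c);
                    if (r == 1 || r == 2) v = -v;          // axis flip
                    mv[c * 4 + r] = static_cast<float>(v); // column-major
                }
            for (int r = 0; r < 3; ++r) {
                double t = tvec.at<double>(r);
                mv[12 + r] = static_cast<float>(r == 0 ? t : -t);
            }
            mv[3] = mv[7] = mv[11] = 0.0f;
            mv[15] = 1.0f;
        }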


    Library integration in an AR application

     
    (24/04/2013) The library is being integrated into a real Augmented Reality application at PointCube.

