Virtual Mirror Rendering

Mirrors are indispensable objects in our lives. The capability of simulating a mirror on a computer display, augmented with virtual scenes and objects, opens the door to many interesting and useful applications, from fashion design to medical interventions. Realistic simulation of a mirror is challenging: it requires accurate viewpoint tracking and rendering, wide-angle viewing of the environment, and real-time performance to provide immediate visual feedback. In this work, we propose a virtual mirror rendering system using a network of commodity structured-light RGB-D cameras. The depth information provided by the RGB-D cameras is used to track the viewpoint and render the scene from different perspectives. Missing and erroneous depth measurements are common problems with structured-light cameras. A novel depth denoising and completion algorithm is proposed in which the noise removal and interpolation procedures are guided by the foreground/background label at each pixel. The foreground/background label is estimated using a probabilistic graphical model that takes into account color, depth, background modeling, depth noise modeling, and spatial constraints. The wide viewing angle of the mirror system is realized by combining the dynamic scene, captured by the static camera network, with a 3D background model created off-line from a color-depth sequence captured by a movable RGB-D camera. To ensure a real-time response, a scalable client-server architecture is used in which the 3D point cloud processing, the viewpoint estimation, and the mirror image rendering are all done on the client side. The mirror image and the viewpoint estimate are then sent to the server for final mirror view synthesis and viewpoint refinement. Experimental results are presented to demonstrate the accuracy and effectiveness of each component and the entire system.
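The core geometric step behind mirror view synthesis is reflecting the tracked viewpoint across the mirror (display) plane and rendering the captured point cloud from that reflected eye position. The sketch below illustrates this idea only; it is not the system's actual implementation, and the function names, the pinhole-projection model, and the camera intrinsics are all illustrative assumptions.

```python
import numpy as np

def reflect_viewpoint(eye, plane_point, plane_normal):
    """Reflect the viewer's 3D eye position across the mirror plane.

    The mirror plane is given by a point on it and its normal.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    signed_dist = np.dot(eye - plane_point, n)
    return eye - 2.0 * signed_dist * n

def project_points(points, eye, K):
    """Project a point cloud into a pinhole camera placed at `eye`
    looking along -z (a simplifying assumption for illustration).

    points : (N, 3) array of 3D points in the mirror's frame
    K      : 3x3 camera intrinsics matrix
    Returns (N, 2) pixel coordinates.
    """
    rel = points - eye                       # translate into camera frame
    uvw = (K @ rel.T).T                      # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]          # perspective divide

# Example: a viewer 1 m in front of a mirror lying in the z = 0 plane
eye = np.array([0.0, 0.0, 1.0])
mirror_point = np.zeros(3)
mirror_normal = np.array([0.0, 0.0, 1.0])
virtual_eye = reflect_viewpoint(eye, mirror_point, mirror_normal)
# virtual_eye is the mirror image of the eye: [0, 0, -1]
```

Rendering the scene from `virtual_eye` instead of `eye` is what makes the display behave like a mirror rather than a window: objects appear at the same apparent depth behind the screen as they are in front of it.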

Selected Publications: