Features
Based on the KinectFusion algorithm [1]
Pure GPU implementation (volume size up to 512³)
Color Model Reconstruction
Real-time model construction and rendering
Depth input filtering
Truncated Signed Distance Field (TSDF) model representation
Fast ICP[2] algorithm on GPU for surface alignment
Raycast algorithm for model visualization
Real-time user interaction with the system
Phong shading
System Work-Flow
Step 1:
After each new raw RGB-Depth frame is received from the Kinect sensor, the data is filtered to remove noise and a normal-depth map is generated from it. At the same time, a second normal-depth map is extracted from the TSDF volume for matching.
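The normal-map part of this step can be sketched on the CPU (illustrative C++, not the project's actual HLSL shader; `normalAt` is a hypothetical name): the normal at a pixel is the normalized cross product of the vectors to its right and lower neighbours' back-projected 3-D points.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    float n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / n, v.y / n, v.z / n};
}

// p, pRight, pDown: 3-D points back-projected from a depth pixel and its
// right and lower neighbours; the surface normal is their cross product.
Vec3 normalAt(Vec3 p, Vec3 pRight, Vec3 pDown) {
    return normalize(cross(sub(pRight, p), sub(pDown, p)));
}
```

In the real pipeline this runs per pixel in a shader, after a bilateral-style smoothing pass over the raw depth.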
Step 2:
Given the two normal-depth maps from Step 1, the fast ICP algorithm computes the rigid transform matrix that aligns them. By continuously multiplying each new transform with the previous one, the system keeps track of the Kinect pose.
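The pose-accumulation part of this step can be sketched as follows (illustrative C++, not the project's HLSL; `updatePose` is a hypothetical name). Each per-frame rigid transform estimated by ICP is folded into the running global pose by matrix multiplication:

```cpp
#include <array>
#include <cassert>

using Mat4 = std::array<std::array<float, 4>, 4>;

Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.f;
    return m;
}

// Standard 4x4 matrix product.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Pose tracking: fold each per-frame ICP increment into the global pose.
Mat4 updatePose(const Mat4& globalPose, const Mat4& icpIncrement) {
    return mul(globalPose, icpIncrement);
}
```

The increment itself comes from the point-to-plane ICP variant of Chen and Medioni [2], which minimizes the distance from each source point to the tangent plane at its matched destination point.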
Step 3:
Because the TSDF volume represents the global model, the new depth map together with the transform from Step 2 lets the system correctly update both the TSDF volume and the Color volume, incorporating the newly measured surface.
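The per-voxel TSDF update is a weighted running average of truncated signed distances, as in KinectFusion. A minimal sketch (illustrative C++, not the project's shader code; `integrate` and the constants are assumptions):

```cpp
#include <algorithm>
#include <cassert>

struct Voxel { float tsdf; float weight; };

// Fuse one new signed-distance observation into a voxel as a weighted
// running average, truncating the distance to [-1, 1] and capping the
// weight so the model can still adapt to change.
void integrate(Voxel& v, float sdf, float truncation, float maxWeight = 128.f) {
    if (sdf < -truncation) return;             // far behind the surface: no update
    float d = std::min(1.f, sdf / truncation); // truncated, normalized distance
    float wNew = 1.f;                          // weight of the new measurement
    v.tsdf = (v.tsdf * v.weight + d * wNew) / (v.weight + wNew);
    v.weight = std::min(v.weight + wNew, maxWeight);
}
```

The Color volume can be updated with the same running-average scheme applied per channel.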
Step 4:
After the model update, a ray-casting algorithm samples both the TSDF and the Color volume to generate a Phong-shaded image of the target model from the Kinect's point of view.
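The core of the raycast is finding, along each view ray, the zero crossing of the TSDF (where the signed distance flips from positive to negative). A minimal sketch (illustrative C++; `raycastDepth` and the sampling callback are assumptions standing in for trilinear volume sampling in the shader):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// March along a ray from tNear to tFar in fixed steps, sampling the TSDF.
// On a positive-to-negative sign change, linearly interpolate the zero
// crossing between the two samples and return its ray parameter.
float raycastDepth(std::function<float(float)> sampleTsdf,
                   float tNear, float tFar, float step) {
    float prev = sampleTsdf(tNear);
    for (float t = tNear + step; t <= tFar; t += step) {
        float cur = sampleTsdf(t);
        if (prev > 0.f && cur < 0.f) {
            // interpolate: fraction of the step where the TSDF reaches zero
            return (t - step) + step * prev / (prev - cur);
        }
        prev = cur;
    }
    return -1.f; // no surface hit along this ray
}
```

At the hit point, the surface normal is taken from the TSDF gradient and fed into the Phong shading model together with the sampled color.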
Implementation aspects
Visual Studio 2013
DirectX 11 API
DXUT framework
HLSL with C++
Vertex Shader, Geometry Shader, Pixel Shader used for GPU implementation
[1] R. A. Newcombe, A. J. Davison, S. Izadi, P. Kohli, O. Hilliges, J. Shotton, D. Molyneaux, S. Hodges, D. Kim, and A. Fitzgibbon, “KinectFusion: Real-time dense surface mapping and tracking,” 2011 10th IEEE International Symposium on Mixed and Augmented Reality, pp. 127–136, Oct. 2011.
[2] Y. Chen and G. Medioni, “Object modeling by registration of multiple range images,” Image and Vision Computing, vol. 10, no. 3, pp. 145–155, Apr. 1992.