RGB-D segmentation

First, we need RGB-D data acquired by a Kinect. There are several public Kinect RGB-D point cloud datasets:

RGB-D SLAM Dataset and Benchmark: http://vision.in.tum.de/data/datasets/rgbd-dataset

B3DO: Berkeley 3-D Object Dataset: http://kinectdata.com/

Tombone's blog: http://quantombone.blogspot.com/2011/10/kinect-object-datasets-berkeleys-b3do.html

The Stanford 3D Scanning Repository: http://graphics.stanford.edu/data/3Dscanrep/

NYU: http://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html

UW: http://www.cs.washington.edu/rgbd-dataset/

We first run a simple test on the UW dataset, since it provides a small labeled subset.

Quick-and-dirty RGB-D point cloud segmentation

Training on 7 samples/points takes 0.001676 seconds.

Testing on 7,482 samples takes 0.012577 seconds, with 93% accuracy.

The feature vector: RGB and depth only! So there is still plenty of opportunity for richer features.
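The post does not say which classifier was used, so as a minimal sketch, here is per-pixel segmentation with the same 4-D feature vector (RGB plus depth), a 1-nearest-neighbour rule, and synthetic stand-in data; the classifier choice, the data, and all names here are assumptions for illustration only.

```python
import numpy as np

def segment_rgbd(train_feats, train_labels, pixels):
    """Label each pixel by its nearest training sample (1-NN).

    Features are 4-D vectors [R, G, B, depth], as in the post.
    The 1-NN classifier is an assumption; the post does not say
    which model it trained.
    """
    dists = np.linalg.norm(pixels[:, None, :] - train_feats[None, :, :], axis=2)
    return train_labels[np.argmin(dists, axis=1)]

# 7 hand-picked training samples (matching the 7 samples in the post):
# a red object near the camera (label 1) vs. a grey background far
# away (label 0) -- synthetic stand-in data, not the UW dataset
train_feats = np.array([
    [200, 30, 30, 0.60], [210, 40, 35, 0.70], [190, 25, 45, 0.65],
    [205, 35, 30, 0.55],
    [120, 120, 120, 2.0], [110, 115, 125, 2.2], [130, 125, 118, 1.9],
], dtype=float)
train_labels = np.array([1, 1, 1, 1, 0, 0, 0])

# classify a small batch of unseen pixels
pixels = np.array([
    [198, 33, 38, 0.62],    # red, near  -> object
    [115, 118, 122, 2.10],  # grey, far  -> background
    [125, 122, 119, 2.05],  # grey, far  -> background
], dtype=float)
print(segment_rgbd(train_feats, train_labels, pixels))  # [1 0 0]
```

In practice each pixel of the depth-registered RGB image would be a row of `pixels`, and the handful of labeled points would come from user clicks or ground-truth masks.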

The resulting segmentation, using the RGB-D values as the features.

The results shown in 3D space.
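To show the results in 3D space, the depth map has to be back-projected into a point cloud. A minimal sketch using the pinhole camera model; the intrinsics below are commonly cited Kinect defaults, not values from the post.

```python
import numpy as np

def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth map (in metres) to an N x 3 point cloud.

    Uses the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    The focal lengths and principal point are widely used Kinect
    defaults (an assumption -- calibrate your own sensor for accuracy).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# tiny 2x2 depth map, all pixels 1 m away, as a usage example
depth = np.full((2, 2), 1.0)
pts = depth_to_points(depth)
print(pts.shape)  # (4, 3)
```

Colouring each 3D point with its predicted segmentation label then gives the kind of 3D view shown above.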

The MATLAB code is available here.