Depth Camera 

Hand Gesture Recognition 

Group 1 Capstone Project

Project Idea

Your attention is crucial to driving safely; adjusting the radio can steal a split second of that attention and may result in an accident. Controlling the radio with a computer vision system keeps your focus on the road: a simple hand gesture issues the command, and an audible tone indicates that your request was recognized.

Using the Intel D435 digital depth camera, we capture a hand gesture from the depth stream, recognize the gesture, and output a signal to control a radio along with an audible acknowledgment tone. For this project a media player will not actually be controlled; instead, we display the recognized action on screen as a proof of concept.
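The proof-of-concept output stage described above can be sketched as a simple mapping from a recognized gesture label to a displayed radio action. The gesture names and actions below are illustrative placeholders, not the project's final gesture set:

```python
# Sketch of the proof-of-concept output stage: map a recognized gesture label
# to a radio action and return the text shown on screen.
# Gesture names and actions are illustrative, not the project's final set.

ACTIONS = {
    "swipe_left": "previous station",
    "swipe_right": "next station",
    "palm_up": "volume up",
    "palm_down": "volume down",
}

def handle_gesture(gesture):
    """Return the action text displayed on screen for a recognized gesture."""
    action = ACTIONS.get(gesture)
    if action is None:
        return "gesture not recognized"
    # In the real system, an audible tone would also play here
    # to acknowledge that the request was recognized.
    return f"radio: {action}"

print(handle_gesture("swipe_right"))  # radio: next station
```

In the full system this dispatch would run after the recognition step, with the print replaced by the on-screen display and acknowledgment tone.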


Intel Depth Camera:

We chose the Intel D435 depth camera because it uses stereo vision to calculate depth. The stereo vision implementation consists of a left imager, a right imager, and an infrared projector. The infrared projector casts a non-visible static IR pattern onto the scene to improve depth accuracy in areas with low texture.

The left and right imagers capture the scene and send their image data to the depth imaging (vision) processor, which calculates a depth value for each pixel by correlating points in the left image with points in the right image and measuring the shift (disparity) between them. The depth pixel values are then assembled into a depth frame.
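The shift between corresponding points relates to depth through the standard stereo triangulation formula, depth = (focal length × baseline) / disparity. A minimal sketch of this calculation, using example values (the D435's baseline is roughly 50 mm, but the focal length and disparity here are illustrative, not calibrated camera parameters):

```python
# Illustrative stereo triangulation: depth = (focal_length_px * baseline) / disparity.
# The values below are examples, not calibrated D435 parameters.

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a pixel disparity between the left and right images to depth in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return (focal_length_px * baseline_m) / disparity_px

# Example: 640 px focal length, 50 mm baseline, 40 px shift between the images.
depth = disparity_to_depth(40, 640, 0.050)
print(round(depth, 3))  # 0.8 -> the point is 0.8 m from the camera
```

Note the inverse relationship: nearby objects produce large disparities, so depth resolution is finest close to the camera, which suits hand-gesture capture.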

Software and Hardware

Hardware:

Platform and Libraries:

Hand Gestures:

Meet The Team

Adam Thompson:

Project Leader, Research & Development Director, Communications Officer

Email: adamjthompson@cmail.carleton.ca

Philippe Beaulieu:

Chief Executive Developer, Programming Lead, Director of 3D Recognition

Email: philippebeaulieu@cmail.carleton.ca