Our team has created a real-time hand recognition system capable of controlling a car stereo based on user hand poses. With this system, the driver can change the song, change the volume, and much more, without taking their eyes off the road!
When our project is complete, the system should be capable of reliably identifying 4 different hand poses (refer to hand gesture library sheet). The project uses machine learning to interpret RGB and depth data to differentiate between the various hand poses.
The first phase of our project began with using the AIRY3D camera to create an RGB image alongside depth data. This phase focuses on implementing machine learning to track a user's hand and to begin identifying simple gestures, and it also includes the creation of a simple user interface.
The second phase of this project focuses more heavily on recognizing additional hand gestures and improving the machine learning model to increase the program's accuracy. This phase also includes incorporating feedback from user tests to improve the user interface.
Our team is using a combination of various software and hardware for the creation of our system. The software and hardware used are listed below:
Python
TensorFlow & Keras
OpenCV
AIRY3D DepthIQ 16 MP camera
Arduino kit
ATmega2560 Microcontroller
SD Card Module
Speaker
Before the system begins processing a hand pose, it first determines whether the user's hand is within the correct range for sensing. If the hand is too close, the user interface will display a message telling the user to move their hand farther away, accompanied by a distinct tone signifying that the hand is not in range for recognition. Similarly, if the hand is too far from the camera, the system will output a message and a sound. The full depth range of the camera is 0 to 250 centimeters; however, the ideal range for detection is 30 to 80 centimeters. Therefore, the user will receive an alert if their hand is less than 30 centimeters or more than 80 centimeters from the camera. By ensuring the user's hand is in range, the hand pose identification will be much more accurate. Common hand poses will each be linked with a command for the radio. Once a pose has been identified, the software will issue the corresponding command to the radio unit.
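The range gate and pose-to-command dispatch described above can be sketched in a few lines of Python. The function names, alert strings, and the pose/command names in the mapping are illustrative assumptions (the actual gestures live in the gesture library sheet), not the project's real API.

```python
# Sketch of the range gate and command dispatch (names are illustrative).
MIN_RANGE_CM = 30   # closer than this: hand is too close
MAX_RANGE_CM = 80   # farther than this: hand is too far
                    # (the camera itself senses 0-250 cm overall)

def check_hand_range(distance_cm):
    """Return None if the hand is in the ideal range, else an alert message."""
    if distance_cm < MIN_RANGE_CM:
        return "Move your hand farther from the camera"
    if distance_cm > MAX_RANGE_CM:
        return "Move your hand closer to the camera"
    return None

# Recognized poses mapped to stereo commands (hypothetical pose/command names).
POSE_COMMANDS = {
    "open_palm": "play_pause",
    "fist": "mute",
    "thumb_up": "volume_up",
    "thumb_down": "volume_down",
}

def handle_pose(pose, distance_cm):
    """Gate on range first; only issue a radio command for an in-range hand."""
    alert = check_hand_range(distance_cm)
    if alert is not None:
        return ("alert", alert)                 # UI message plus alert tone
    return ("command", POSE_COMMANDS.get(pose))
```

For example, a fist detected 50 cm from the camera would dispatch the mute command, while the same pose at 20 cm would only trigger the "move farther away" alert.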
Nicholas is a Carleton University student studying Optical Systems and Sensors. Nicholas is our machine learning lead.
Sarah is currently a fourth year Optical Systems and Sensors student at Carleton University. Sarah is our user interface and communications director.
Rohan is currently a student at Carleton University studying Optical Systems and Sensors. Rohan is our head of hand recognition. Outside of school, Rohan works as a Junior Systems Engineer.
Heather is currently a student at Carleton University studying Optical Systems and Sensors. Heather is our embedded systems lead.
At the current time we are looking to add the following functionality and features to this project:
Improve Pose Recognition Reliability
Use Depth Data to Activate Pose Recognition
Incorporate our full Library of Gestures
nickfitzgerald@cmail.carleton.ca
sarahmonforton@cmail.carleton.ca
rohanchopra@cmail.carleton.ca
heatherreese@cmail.carleton.ca