Engineering
Classifying Hand Gestures Using a Low-Scan-Line Ultrasound Probe and a Temporal Convolutional Network
Ephram Cukier
Each year, there are more than one million amputations worldwide, yet no prosthetic currently available to patients is both accurate and compact enough to restore functionality to the amputated region. Most research on smart prosthetics for upper-limb amputees has focused on analyzing data from surface electromyography (sEMG) sensors, which detect the electrical signals that activate muscles. Although in recent years there has been research on using ultrasound (US) sensors to identify hand gestures, prototypes have been bulky; power inefficient, reducing the battery life of the device; or inaccurate. Standard US probes contain 256 scan lines, making the device large and bulky. Using a modified probe with only 4 scan lines keeps the device significantly more compact, at the cost of information. A Temporal Convolutional Network (TCN) functions similarly to a Convolutional Neural Network (CNN) in that it can process and extract information from a 2D image. The difference is that a TCN adds the dimension of time, processing multiple buffered images at once. This allows it to learn information about motion in the images, which is valuable when attempting to map the motion of muscles. In this work, I create a TCN-based system that uses temporal information to compensate for the information lost to the missing scan lines. With this approach, I aim to achieve higher accuracy than the CNN algorithms currently accepted in the field. Further, I hope to develop a set of standards for more efficient data collection, so that results can be achieved on a user-by-user basis without a significant time investment.
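As a rough illustration of the idea described above (not the actual model used in this work), a temporal convolution slides a filter along the time axis of a buffer of frames, so each output depends on the current frame and the ones just before it. The sketch below uses NumPy with entirely hypothetical dimensions: 4 scan lines, 64 depth samples per line, and a buffer of 8 frames.

```python
import numpy as np

# All dimensions are illustrative assumptions, not values from the paper:
# 4 scan lines, 64 depth samples per line, a buffer of 8 consecutive frames.
SCAN_LINES, DEPTH, FRAMES = 4, 64, 8
KERNEL_T = 3  # temporal kernel width (how many past frames each output sees)

rng = np.random.default_rng(0)
buffer = rng.standard_normal((FRAMES, SCAN_LINES, DEPTH))   # stand-in for US frames
kernel = rng.standard_normal((KERNEL_T, SCAN_LINES, DEPTH)) # one temporal filter

def causal_temporal_conv(frames, kernel):
    """Slide the kernel over the time axis only, so each output value
    depends on a window of KERNEL_T consecutive buffered frames."""
    T, K = frames.shape[0], kernel.shape[0]
    out = np.empty(T - K + 1)
    for t in range(T - K + 1):
        window = frames[t:t + K]          # K buffered frames
        out[t] = np.sum(window * kernel)  # one feature per time step
    return out

features = causal_temporal_conv(buffer, kernel)
print(features.shape)  # (FRAMES - KERNEL_T + 1,) = (6,)
```

A full TCN stacks many such filters with nonlinearities and dilations, but the key difference from a per-frame CNN is visible here: each feature summarizes a short window of frames, so changes between frames (i.e., muscle motion) influence the output directly.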