Comparing the Performance of Two Markerless Deep Learning-Based Motion Capture Software Toolkits for Human Pose Estimation
Student: Trijana Sengadu Ashok Kumar
Mentors: Dr. Christopher Buneo – SBHSE
Dr. Stephen Helms Tillery – SBHSE
Dr. Paul VanGilder – SBHSE
YouTube Link: View the video below before joining the Zoom meeting
Zoom link: https://asu.zoom.us/j/84648378985
Time: 10am – 2pm
Abstract
Motion capture data has become increasingly ubiquitous, creating a need to manage and analyze it effectively. Artificial neural networks can help by processing large amounts of such data. For example, deep learning-based neural networks have recently been used for pose estimation, which is important for tracking human activity and movement as well as for robotics and augmented reality applications. Automation is an important objective that such approaches can help achieve. DeepLabCut is an open-source software toolkit that uses deep learning for markerless 3D pose estimation. DeepPoseKit is another toolkit that uses GPU-accelerated deep learning to automatically estimate the locations of an animal's body parts directly from images or videos. For this project, we compared DeepLabCut and DeepPoseKit to determine which is more efficient for analyzing animal or human movement in order to model behavior. The toolkits were compared on inference speed, accuracy, and ease of use. For the evaluation, original video data of repetitive hand movements was captured and analyzed, divided into two subcategories: (1) opening and closing of the hand, and (2) grasping and squeezing an object. The image frames from the videos were annotated using DeepLabCut's Graphical User Interface (GUI). The annotated data was then used to train and analyze models with both toolkits in Google Colaboratory. This study aims to determine which of the two toolkits is better suited for biomedical research.