We believe that estimating the human pose is crucial for determining whether a person has fallen. Human 2D pose estimation—the problem of localizing anatomical keypoints or "parts"—has largely focused on finding the body parts of individuals. Compared with object recognition, pose estimation demands a deeper understanding of the objects in an image, since the machine must describe their detailed movement.
A significant body of work addresses human pose estimation, and recent advances in deep learning have opened new possibilities; most modern pose estimators rely on deep learning.
We implement an approach known as Part Affinity Fields (PAFs) to efficiently detect the 2D poses of multiple people in an image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy. A Raspberry Pi is used for image capture and data processing. The following figures show the image-processing pipeline. For details, please refer to Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields.
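To make the PAF idea concrete, the sketch below (not the paper's implementation; function names and the synthetic field are our own illustrative assumptions) scores a candidate limb by sampling the affinity field along the segment between two keypoint candidates, then greedily pairs candidates by descending score—the bottom-up parsing step described above.

```python
import numpy as np

def paf_score(paf_x, paf_y, p1, p2, n_samples=10):
    """Score a candidate limb p1 -> p2 by averaging the dot product of the
    PAF vectors sampled along the segment with the segment's unit direction."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    v = p2 - p1
    norm = np.linalg.norm(v)
    if norm < 1e-8:
        return 0.0
    v = v / norm
    total = 0.0
    for t in np.linspace(0.0, 1.0, n_samples):
        # Sample the field at an integer pixel on the segment (arrays are [y, x]).
        x, y = (p1 + t * (p2 - p1)).round().astype(int)
        total += paf_x[y, x] * v[0] + paf_y[y, x] * v[1]
    return total / n_samples

def greedy_match(scores):
    """Greedy bottom-up association: take the highest-scoring candidate pairs
    first, using each part candidate at most once."""
    pairs, used_a, used_b = [], set(), set()
    for (i, j), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if s > 0 and i not in used_a and j not in used_b:
            pairs.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return pairs

# A toy PAF pointing uniformly to the right: a horizontal limb scores ~1,
# a vertical one ~0, so the greedy step keeps the horizontal hypothesis.
paf_x, paf_y = np.ones((20, 20)), np.zeros((20, 20))
horizontal = paf_score(paf_x, paf_y, (2, 10), (15, 10))
vertical = paf_score(paf_x, paf_y, (10, 2), (10, 15))
```

In the full method this scoring is run for every limb type over all detected keypoint candidates, and the greedy matching assembles the per-limb pairs into whole-body skeletons.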
However, CMU's original model is too large to fit on most common devices at present. The right figure shows that the computation exceeded the available memory.