System Overview
The prosthetic hand is designed to interpret muscle signals from the forearm and translate them into physical motion. It uses EMG sensors to detect these signals, a microcontroller to process the data, a machine learning model to determine the intended grip type, and a servo motor to rotate the thumb into the correct position for grasping various objects.
[Diagram: EMG sensor → signal → microcontroller → data → machine learning model → command → servo motor → movement]
Figure 1. The flow of signals and information between each primary system module.
Hardware Breakdown
The MyoWare EMG sensor detects and collects data from muscle activity. It works by sensing the natural electrical signals produced by the nervous system when a person activates specific muscle groups.
The microcontroller acts as the central control unit for the prosthetic hand. It takes the incoming data from the EMG sensor and converts it into a format that the rest of the system can understand.
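As a minimal sketch of that conversion step, the microcontroller can map raw ADC counts from the EMG sensor into voltages or a normalized 0–1 value for the rest of the pipeline. The ADC resolution and reference voltage below are assumptions (a typical 10-bit ADC with a 3.3 V reference), not values taken from this project:

```python
# Illustrative sketch: convert raw ADC counts from the EMG sensor into a
# voltage and a 0-1 normalized value the rest of the system can use.
# ADC_BITS and V_REF are assumptions; adjust to the actual microcontroller.

ADC_BITS = 10
V_REF = 3.3
ADC_MAX = (1 << ADC_BITS) - 1  # 1023 for a 10-bit ADC

def counts_to_volts(raw: int) -> float:
    """Map a raw ADC reading to a voltage."""
    return raw / ADC_MAX * V_REF

def normalize(raw: int) -> float:
    """Map a raw ADC reading into the 0-1 range for downstream processing."""
    return raw / ADC_MAX
```
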
The machine learning model serves as the decision-making layer of the system. It analyzes the processed EMG data to identify patterns that correspond to different hand movements.
The servo motor is responsible for physically repositioning the thumb to enable different types of grips. It rotates the thumb to a specific position based on commands from the microcontroller.
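The command step can be sketched as a simple lookup from classified grip to servo angle. The grip names and angle values below are illustrative assumptions, not measured positions from the actual hand:

```python
# Sketch of the grip-to-angle lookup the microcontroller might use.
# Grip labels and angles are illustrative assumptions, not measured values.

GRIP_ANGLES = {
    "power": 0,     # thumb opposed to all fingers
    "pinch": 45,    # thumb opposed to the index finger
    "lateral": 90,  # thumb alongside the palm (key grip)
}

def thumb_angle(grip: str) -> int:
    """Return the servo angle for a classified grip, defaulting to power grip."""
    return GRIP_ANGLES.get(grip, GRIP_ANGLES["power"])
```
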
Figure 2. 3D model design depicting the bottom layer of our prosthetic hand design. Shown are channels for the residual limb, finger connections, and tracks for the thumb to rotate along both mechanically and electrically.
The prosthetic hand is entirely 3D printed, allowing for a lightweight, low-cost, and customizable design. It includes mechanical joints that replicate natural finger movement, with a special emphasis on the thumb. The thumb joint is designed to rotate to multiple lateral positions, enabling different grip styles. All mechanical parts are printed as modular components, making them easier to replace or iterate during testing. The hand's internal structure also includes mounting points for motors and wiring channels, integrating both form and function.
Signal Processing & Machine Learning
Raw EMG signals are often noisy and inconsistent due to electrical interference and biological variability. The first step in signal processing involves cleaning up this signal using a series of filters. High-pass filters are used to remove low-frequency motion artifacts, while low-pass filters cut off high-frequency noise. After filtering, the signal is rectified, meaning it's converted to absolute values so that muscle activity can be measured in a consistent, positive-only range. This cleaned-up signal becomes a stable input for further analysis.
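The filter-then-rectify step described above can be sketched as follows. The 20–450 Hz pass band and 1 kHz sampling rate are typical surface-EMG assumptions rather than values from this project, and the high-pass and low-pass stages are combined into a single band-pass filter:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Sketch of the filter-then-rectify step. The 20-450 Hz band and the
# 1 kHz sampling rate are typical EMG assumptions, not project values.
FS = 1000.0  # sampling rate in Hz (assumed)

def preprocess_emg(raw: np.ndarray) -> np.ndarray:
    """Band-pass filter the raw EMG signal, then full-wave rectify it."""
    # High-pass at 20 Hz removes low-frequency motion artifacts; low-pass
    # at 450 Hz removes high-frequency noise. Combined as one band-pass.
    b, a = butter(4, [20.0, 450.0], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, raw)  # zero-phase filtering (no time shift)
    return np.abs(filtered)         # rectification: positive-only signal
```
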
Once the signal is filtered, specific features are extracted to capture muscle activity in a way the machine learning model can understand. These features are numerical representations that summarize key characteristics of the signal over short time windows, such as average signal strength, intensity, or rapid changes. These values give the model a clearer picture of how the muscle is behaving, turning messy biological data into structured inputs.
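The features named above (average strength, intensity, rapid changes) correspond to standard time-domain EMG features; the exact feature set used in this project is an assumption on our part, but a representative window summary looks like this:

```python
import numpy as np

# Illustrative time-domain features commonly used for EMG classification.
# The exact feature set in this project is an assumption.

def extract_features(window: np.ndarray) -> np.ndarray:
    """Summarize one short window of filtered EMG as a feature vector."""
    mav = np.mean(np.abs(window))               # mean absolute value: average strength
    rms = np.sqrt(np.mean(window ** 2))         # root mean square: intensity
    wl = np.sum(np.abs(np.diff(window)))        # waveform length: rapid changes
    zc = np.sum(np.diff(np.sign(window)) != 0)  # zero-crossing count
    return np.array([mav, rms, wl, zc])
```
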
The machine learning model is trained to recognize patterns in the extracted EMG features and link them to specific grip intentions. Using labeled data collected during training sessions, where the user performs muscle activations, the model learns the unique "signature" of each grip type. Once trained, the model runs on the microcontroller in real time. It classifies incoming EMG data into one of several predefined grip positions. Based on its prediction, it sends a signal to the servo motor to rotate the thumb into the appropriate position. This model allows the hand to adapt its behavior based on the user's intent, without needing a physical button or manual switch.
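The train-then-classify loop can be sketched as below. The model type (logistic regression) and the synthetic feature data are assumptions standing in for the project's actual classifier and its recorded, labeled training windows:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of training and real-time inference. The model choice
# and synthetic data are assumptions; the real system would train on
# labeled EMG feature windows recorded during training sessions.

rng = np.random.default_rng(0)

# Synthetic, well-separated feature vectors for two grip classes,
# standing in for features extracted from labeled EMG windows.
X_power = rng.normal(loc=0.2, scale=0.05, size=(50, 4))
X_pinch = rng.normal(loc=0.8, scale=0.05, size=(50, 4))
X = np.vstack([X_power, X_pinch])
y = ["power"] * 50 + ["pinch"] * 50

model = LogisticRegression().fit(X, y)

def classify_grip(features: np.ndarray) -> str:
    """Predict the intended grip from one extracted feature vector."""
    return model.predict(features.reshape(1, -1))[0]
```

In the deployed system this prediction would then be passed to the servo command stage, replacing any physical button or manual mode switch.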
Image sources: OpenAI