The control methodology of the robotic hand is shown on the left.
Black - Manual Control Pathway:
1. Human Interaction: The process commences with the operator, who manipulates an Xbox controller to send button inputs to the computer.
2. Computer Processing: Upon receiving the inputs, the computer forwards the information to an Arduino board via serial communication.
3. Arduino Control: The Arduino board, running dedicated control programs, translates these signals into precise movements, thereby controlling the motion of the robotic hand.
Red - Tactile Feedback Pathway:
1. Tactile Sensing: The computer retrieves readings from the tactile sensors located on the fingertips of the robotic hand.
2. Local Analysis: The tactile information is then analyzed locally on the computer using different methods; in this project, a Graph Neural Network (GNN) method and a threshold method are used for slip detection.
3. Haptic Feedback: Based on the sensor readings, vibrational feedback is sent to the controller, providing the operator with an immersive tactile experience.
4. Optional Closed-Loop Control: The operator may decide whether to engage the closed-loop control. If activated, the computer interfaces with the Arduino board via serial communication and autonomously regulates the gripping force based on the sensor readings.
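The closed-loop step above can be sketched as a simple proportional regulator. This is a minimal illustration only: the hardware I/O (serial link to the Arduino, controller rumble) is abstracted away, and the function name, gain, and command range are assumptions for the sketch, not the project's actual parameters.

```python
def regulate_grip(force_reading, target_force, current_command,
                  gain=0.1, min_cmd=0.0, max_cmd=1.0):
    """Proportional adjustment of the grip command toward a target force.

    If the measured contact force falls below the target (incipient slip),
    the command is increased; if it exceeds the target, it is relaxed.
    """
    error = target_force - force_reading
    new_cmd = current_command + gain * error
    # Clamp to the actuator's valid command range.
    return max(min_cmd, min(max_cmd, new_cmd))


# Force dropped below target, so the grip command tightens from 0.5.
cmd = regulate_grip(force_reading=0.8, target_force=1.2, current_command=0.5)
print(round(cmd, 3))
```

In a real loop this would run each control cycle, with `force_reading` taken from the tactile sensors and `new_cmd` written to the Arduino over serial.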
This control structure provides the user with immediate and intuitive feedback, allowing responsive and effective manipulation of the robotic hand.
Workflow of integrating tactile feedback into the soft gripper
Fig. 3 Tactile sensor readings and contact force recorded from a threshold-based grip test: illustration of the stable period and the oscillation period during grasping, identification of the grasping starting point, and the minimum force threshold.
To determine the minimum force required to securely grip objects without slipping, a calibration method based on gradual reduction of the gripping force is employed. The graph shows the contact-force curve during an object-grasping process, from which we can easily identify the initial sensor noise, the oscillation period of unstable grasping, and the stable grasping period. We calculate the noise level from the readings taken before the gripper starts to grasp the object. After filtering out this sensor noise, we obtain a reasonable threshold value by calculating the mean of the remaining non-zero readings, and we infer that the lowest reading during the oscillation period represents the minimum force threshold.
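The calibration procedure above can be expressed compactly. This is a sketch under simplifying assumptions: the trace is a 1-D contact-force array, the first `noise_window` samples are taken before the grasp begins, and the function name is illustrative.

```python
import numpy as np

def minimum_force_threshold(readings, noise_window):
    """Estimate grip-force statistics from one recorded grasp trace.

    readings: 1-D contact-force trace; noise_window: number of initial
    samples recorded before the grasp starts, used to estimate noise.
    """
    noise_level = readings[:noise_window].max()   # pre-grasp noise floor
    grasp = readings[noise_window:]
    valid = grasp[grasp > noise_level]            # filter out sensor noise
    threshold = valid.mean()                      # mean of remaining readings
    minimum_force = valid.min()                   # lowest oscillation reading
    return noise_level, threshold, minimum_force


# Synthetic trace: noise, then oscillating grasp, then stable grasp.
trace = np.array([0.02, 0.01, 0.03,        # sensor noise before grasp
                  0.6, 0.4, 0.7, 0.5,      # oscillation period
                  0.55, 0.56, 0.55])       # stable period
noise, thr, fmin = minimum_force_threshold(trace, noise_window=3)
print(noise, round(thr, 3), fmin)
```

Here `fmin`, the lowest filtered reading during oscillation, plays the role of the minimum force threshold described in the text.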
However, threshold-based methods require manual tuning of thresholds to adapt to different conditions. Therefore, we explore machine learning-based approaches, which can adapt to new operating conditions without re-tuning.
Thus, we collected a dataset for GNN training from experiments using the threshold-based method, categorizing samples under the minimum force threshold as 0 (non-slip, shown in red in the figure) and those above as 1 (slip, shown in green). The dataset comprises 2100 training and 700 test samples. In this study, a three-layer GCN model trained on this dataset achieved a high test accuracy of 96.2%.
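The labeling rule is a one-liner. The sketch below follows the convention stated in the text (below threshold → 0, above → 1); the function name is an assumption.

```python
import numpy as np

def label_samples(forces, min_force_threshold):
    """Label tactile samples per the paper's convention:
    0 = non-slip (at or below the minimum force threshold), 1 = slip (above)."""
    return (np.asarray(forces) > min_force_threshold).astype(int)


labels = label_samples([0.2, 0.9, 0.6], min_force_threshold=0.5)
print(labels.tolist())
```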
Fig. 4 GCN architecture with 3 layers for sensor data integration. Blue dots represent sensor nodes, connected within sensor arrays by red lines for local feature extraction, and across arrays by green lines for data integration.
GNNs excel at learning complex, nonlinear relationships in non-Euclidean data. This ability makes GNN models more accurate at detecting slip events than simplistic thresholding methods that rely on fixed rules. Additionally, they can infer the slip state reliably even when encountering different objects, without any manual tuning of threshold parameters.
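A single graph-convolution layer of the kind stacked three times in the GCN can be sketched in plain NumPy, following the standard propagation rule H' = σ(D̂⁻¹ᐟ² (A+I) D̂⁻¹ᐟ² H W). The graph size, feature widths, and random weights below are illustrative, not the model's actual configuration.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN propagation step with symmetric normalization and ReLU."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))    # D_hat^(-1/2)
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, norm @ features @ weights)  # ReLU activation


# Tiny example: 4 nodes in a path graph, 2 input features, 3 output channels.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
h = rng.normal(size=(4, 2))
w = rng.normal(size=(2, 3))
out = gcn_layer(adj, h, w)
print(out.shape)
```

Stacking three such layers (with learned weights and a final classification head) yields a three-layer GCN like the one described above.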
The effectiveness of feature learning is highly dependent on the comprehensiveness of the training data, which motivated the labeled dataset described above.
Another important parameter is the edge set, which describes how the nodes are connected to each other. Given the three 4×4 array sensors, each node (pixel) is interconnected not only with its immediate neighbors but also with its corresponding nodes in the other sensors, as depicted in the figure. Nodes connect to their neighbors within the same layer, as shown by the red edges, and additionally link to the corresponding nodes in adjacent layers, as shown by the green lines. This structure enables the GNN to process data both locally and across layers, capturing the spatial relationships between the different array sensors more effectively.
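The edge construction just described can be sketched directly: 4-neighbor grid edges within each 4×4 array, plus edges linking each pixel to the corresponding pixel of the adjacent array. The node-indexing scheme is an assumption for the sketch.

```python
def build_edges(rows=4, cols=4, layers=3):
    """Undirected edge list for stacked tactile arrays: grid edges within
    each layer plus edges between corresponding nodes of adjacent layers."""
    def idx(layer, r, c):
        # Flatten (layer, row, col) into a single node index.
        return layer * rows * cols + r * cols + c

    edges = []
    for l in range(layers):
        for r in range(rows):
            for c in range(cols):
                if c + 1 < cols:                   # horizontal grid neighbor
                    edges.append((idx(l, r, c), idx(l, r, c + 1)))
                if r + 1 < rows:                   # vertical grid neighbor
                    edges.append((idx(l, r, c), idx(l, r + 1, c)))
                if l + 1 < layers:                 # same pixel, adjacent array
                    edges.append((idx(l, r, c), idx(l + 1, r, c)))
    return edges


edges = build_edges()
print(len(edges))
```

For three 4×4 arrays this yields 24 within-layer edges per array (72 total) plus 16 cross-layer edges per adjacent pair (32 total), i.e. 104 undirected edges over 48 nodes.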