Kilian Freitag, Yiannis Karayiannidis, Jan Zbinden, Rita Laezza
Chalmers University of Technology, Gothenburg, Sweden
Lund University, Sweden
Training data recorded for bionic limb control may not accurately represent muscle activity during daily use. This paper introduces a novel approach to address this distributional shift and improve the usability of a movement classifier through reinforcement learning (RL). Using RL, we tune a pretrained classifier within an interactive game environment, enabling interaction-based learning. Our study demonstrates the successful application of RL to enhance the usability of a bionic limb controller, resulting in improvements across different tasks. This work contributes to the advancement of personalized, usability-focused learning in bionic upper limb control.
In this initial step, the subject is prompted to repeatedly perform specific movements during a recording session. The recorded electromyography (EMG) data is then labeled to create a dataset, and a classifier is fitted to this data with supervised learning (SL) to predict movements.
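The pretraining step can be sketched as follows. The feature representation (one vector per EMG window) and the classifier type (a nearest-centroid rule on synthetic data) are illustrative assumptions here, not the model actually used in the study:

```python
import numpy as np

# Hypothetical labeled dataset: one feature vector (e.g., mean absolute
# value per EMG channel) per window, with a movement label per window.
rng = np.random.default_rng(0)
n_windows, n_channels, n_movements = 300, 8, 4
y = rng.integers(0, n_movements, size=n_windows)
# Shift each class's features so the classes are separable.
X = rng.normal(size=(n_windows, n_channels)) + 2.0 * y[:, None]

# Supervised fit: one centroid per movement class.
centroids = np.stack([X[y == c].mean(axis=0) for c in range(n_movements)])

def predict(x):
    """Predict the movement whose class centroid is nearest to x."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
```

The fitted `predict` function stands in for the initial policy that is handed to the game in the next step.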
This initial policy is then employed in a game similar to Guitar Hero, as seen on the right. The arrows indicate the predicted movements per degree of freedom.
Once the user completes the song, the newly recorded EMG data from the game is added to a dataset containing all the RL data from the songs played thus far. Using this data, a new classifier is trained, and the process is repeated (8 times in our experiment).
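The retraining loop can be sketched as below. The per-song data shapes and the `refit` helper are hypothetical placeholders; the sketch shows only the accumulation-and-retrain structure, not the actual RL update applied to the pretrained model:

```python
import numpy as np

def refit(X, y):
    # Placeholder for retraining the movement classifier on accumulated
    # game data; the real RL fine-tuning is not reproduced here.
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

rng = np.random.default_rng(0)
rl_X, rl_y = [], []      # dataset grows with every song played
for song in range(8):    # 8 retraining rounds, as in the experiment
    # Hypothetical EMG features and labels recorded during one song.
    X_new = rng.normal(size=(50, 8))
    y_new = rng.integers(0, 4, size=50)
    rl_X.append(X_new)
    rl_y.append(y_new)
    # Retrain on ALL RL data collected so far, not just the latest song.
    model = refit(np.concatenate(rl_X), np.concatenate(rl_y))
```

Retraining on the full accumulated dataset, rather than only the most recent song, keeps earlier game interactions from being forgotten between rounds.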
Two Motion Tests are carried out at the end of the experiment, using the pretrained model and the latest RL model in random order. In a Motion Test, each movement is performed and is considered successful when the correct movement is predicted for a sufficient duration. The next movement is prompted either after the current one succeeds or when the timeout is reached.
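The success criterion can be sketched as a small helper. Treating "sufficient time" as a run of consecutive correct predictions, and the specific `hold_needed` and `timeout` values, are assumptions for illustration:

```python
def motion_test(predictions, target, hold_needed=3, timeout=20):
    """Return True if `target` is predicted for `hold_needed` consecutive
    steps before `timeout` prediction steps have elapsed."""
    held = 0
    for pred in predictions[:timeout]:
        held = held + 1 if pred == target else 0
        if held >= hold_needed:
            return True   # movement held long enough: success
    return False          # timeout reached without success

print(motion_test([0, 1, 1, 1], target=1))  # True
```

Running this per movement, with the pretrained and RL models in random order, mirrors the comparison described above.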
Pretrained (left)
After RL training (right)
Playback speed: 3x