The starred authors (*) are co-first authors and contributed equally.

Composition of Robot Motions by Designing Dynamical Systems in RNN


    For general-purpose robot motion generation, selecting and switching motions appropriate to the situation is essential. In human-robot collaboration, robots can assist humans in various situations by generating motions based on external instructions. Robots operating alone should likewise select motions automatically based on the situation recognized from their sensors. However, as the number of situations and the variations in required robot tasks increase, it becomes difficult to design all motion patterns by hand. Recent studies on robot manipulation using deep neural networks (DNNs) have primarily focused on single tasks. We therefore investigated a DNN-based robot manipulation model that can execute long sequential dynamic tasks by performing multiple short sequential tasks at appropriate times.
    We proposed a method to design a dynamical system in a multiple-timescale RNN [1,2]. Although an RNN can embed multiple motion sequences, representing motion switching is not easy because the internal states are usually learned independently for each sequence. In the proposed method, the initial and final robot postures of each subtask are designed to be common, and the RNN is trained so that its internal state can be switched depending on the input. The RNN comprises two groups of neurons with different time constants: low-level neurons that learn fast-changing dynamics, and high-level neurons that learn slow-changing dynamics. In addition, we define a learning constraint that brings the internal states of the low-level neurons at the beginning and end of each motion sequence closer together, which achieves explicit motion switching according to the instruction signals input to the model.
    We also proposed a method that compensates for undefined behaviors using a separate controller [3]. In this method, the output of the model is switched to a model-based controller that can perform stably under undefined behaviors.
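A rough sketch of the idea may help. The update below is the standard continuous-time RNN rule with per-neuron time constants (small values give fast-changing low-level neurons, large values give slow-changing high-level neurons), and the final line illustrates the kind of constraint described above, pulling the low-level internal state at the start and end of a sequence toward each other. All sizes, time-constant values, and variable names here are illustrative assumptions, not the settings used in [1,2].

```python
import numpy as np

def mtrnn_step(u, y, x, W, tau):
    """One multiple-timescale RNN update.

    u   : internal states (membrane potentials) of all neurons
    y   : activations tanh(u) from the previous step
    x   : external input, padded to the full state size
    W   : recurrent weight matrix
    tau : per-neuron time constants (small = fast, large = slow)
    """
    u_next = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y + x)
    return u_next, np.tanh(u_next)

rng = np.random.default_rng(0)
n_fast, n_slow, n_in = 30, 10, 4              # illustrative sizes
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0),   # low-level: fast dynamics
                      np.full(n_slow, 30.0)]) # high-level: slow dynamics
W = rng.normal(scale=0.1, size=(n, n))

u = np.zeros(n)
y = np.tanh(u)
states = []
for t in range(50):
    x = np.zeros(n)
    x[:n_in] = np.sin(0.2 * t)                # toy sensory input
    u, y = mtrnn_step(u, y, x, W, tau)
    states.append(u.copy())

# Sketch of the learning constraint: penalize the distance between the
# low-level internal states at the start and end of the sequence, so every
# subtask begins and ends near a common switchable state.
closure_loss = np.sum((states[0][:n_fast] - states[-1][:n_fast]) ** 2)
```

In training, a term like `closure_loss` would be added to the usual prediction loss; here it is only computed to show its form.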
    For appropriate controller switching, undefined behaviors must be detected. We assume that the internal states of the RNN embed the information necessary for task execution, and we add neurons that predict the previous motion trajectories to the middle layer of the RNN trained for the main robot task. By comparing these predictions with the actual motion trajectories, we determine whether the current robot posture lies within the training dataset distribution. The proposed method evaluates the internal dynamics without changing the weights of the original model, thereby enabling switching between defined and undefined behaviors without degrading the task performance of the learning-based controller.
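The detection-and-switching logic can be sketched as follows. A minimal illustration, not the implementation in [3]: the function names, the mean-squared-error score, and the fixed threshold are all assumptions (in practice the threshold would be calibrated, e.g. on validation data).

```python
import numpy as np

def detect_undefined(pred_traj, actual_traj, threshold):
    """Flag an undefined behavior when the error between the trajectory
    predicted from the RNN's internal state and the actually observed
    trajectory exceeds a threshold (i.e. the posture looks out of the
    training-data distribution)."""
    err = float(np.mean((pred_traj - actual_traj) ** 2))
    return err > threshold, err

def select_command(rnn_cmd, model_based_cmd, undefined):
    """Switch the output to the model-based fallback controller while the
    behavior is judged undefined; otherwise use the learned controller."""
    return model_based_cmd if undefined else rnn_cmd

# Toy usage: the prediction matches the observation closely, so the
# learned controller stays in charge.
pred = np.zeros((10, 3))                 # predicted joint trajectory
actual = pred + 0.01                     # observed trajectory, near-identical
undefined, err = detect_undefined(pred, actual, threshold=0.05)
cmd = select_command("rnn_output", "model_based_output", undefined)
```

Because only added prediction neurons are evaluated, the weights of the original task model are untouched, matching the claim above that switching does not degrade the learned controller.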
    1. Kei Kase, Kanata Suzuki, Pin-Chu Yang, Hiroki Mori, Tetsuya Ogata: Put-In-Box Task Generated from Multiple Discrete Tasks by a Humanoid Robot Using Deep Learning, Proceedings of 2018 IEEE International Conference on Robotics and Automation (ICRA'18), pp.6447-6452, acceptance rate 40.6%, Brisbane, Australia, May 21-25th, 2018.
    2. Kanata Suzuki, Hiroki Mori, Tetsuya Ogata: Motion Switching with Sensory and Instruction Signals by Designing Dynamical Systems Using Deep Neural Network, IEEE Robotics and Automation Letters, vol.3, issue.4, pp.3481-3488, 2018.
    3. Kanata Suzuki, Hiroki Mori, Tetsuya Ogata: Compensation for undefined behaviors during robot task execution by switching controllers depending on embedded dynamics in RNN, IEEE Robotics and Automation Letters, vol.6, no.2, pp.3475-3482, 2021 (presented at ICRA'21).