Objectives

The goal of this workshop is to bring together researchers from robotics and machine learning to share knowledge about deep and probabilistic generative models and to work toward a future cognitive architecture for robots. The workshop also aims to examine the challenges and opportunities emerging from the interdisciplinary research field covering machine learning, cognitive science, and robotics.

Advances in deep learning have enabled robots to recognize their environment, e.g., through visual and speech recognition, and to learn behaviors efficiently, e.g., through reinforcement and imitation learning. However, most of this success depends heavily on labeled data or hand-crafted reward functions that must be prepared before learning. In contrast, human children can learn motor skills, form perceptual categories, acquire planning capabilities, and learn language through interaction with their environment and with other people. This learning process involves enormous amounts of complex multimodal sensorimotor information and does not rely on hand-crafted labels or reward functions. The development of human infants can therefore be regarded as a self-organizing process; in other words, it should be modeled using unsupervised learning methods.

Many unsupervised learning methods based on probabilistic generative models have been proposed in cognitive robotics. Integrative probabilistic generative models for multimodal concept formation and language acquisition have been proposed, and it has been shown that robots can form concepts and acquire vocabularies in an unsupervised manner [1]. However, the probabilistic generative models proposed in this series of studies use only conventional distributions, e.g., Gaussian, Dirichlet, multinomial, and Wishart distributions, in their hierarchical Bayesian models, and they cannot capture the complex structure of real-world sensorimotor information. In contrast, deep generative models, e.g., variational autoencoders and generative adversarial networks, have recently been gaining attention and demonstrating the great potential of unsupervised learning based on deep learning [2,3]. However, there are still few applications of such deep generative models to the development of cognitive models in robotics.

These advances in deep learning and hierarchical Bayesian modeling provide new possibilities for developing flexible cognitive architectures that integrate high-level and low-level cognitive capabilities and achieve lifelong multimodal learning in robotics. Such a hierarchical integration of cognitive capabilities is required to enable a robot to use language to communicate and collaborate with people in real-world environments [4]. Future robots are expected to acquire, use, and understand language based on real-world information and to perform a wide range of tasks in real-world environments.

In this workshop, we will investigate how a cognitive architecture for robots can be created using deep and probabilistic generative models. To this end, we aim to share knowledge about state-of-the-art machine learning methods that contribute to modeling language-related capabilities in robotics, and to exchange views among cutting-edge robotics researchers, with special emphasis on the use of deep generative models in robotics and the modeling of a wide range of cognitive capabilities using probabilistic generative models.
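As a concrete illustration of the deep generative models mentioned above, the following is a minimal sketch of a variational autoencoder [2] written in PyTorch. The network architecture, layer sizes, and Bernoulli observation model are illustrative assumptions made for this sketch, not details taken from the cited studies.

# Minimal variational autoencoder (VAE) sketch in PyTorch.
# Layer widths, input size, and the Bernoulli likelihood are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):
        super().__init__()
        # Encoder q(z|x): maps an observation to the mean and log-variance of a Gaussian
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder p(x|z): maps a latent sample back to observation space
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients flow through mu and logvar
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    # Negative evidence lower bound: Bernoulli reconstruction term plus KL(q(z|x) || N(0, I))
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

A model of this kind can be trained in an unsupervised manner on multimodal sensor data by minimizing elbo_loss over minibatches of observations, without any labels or reward functions.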
The workshop will include keynote presentations from established researchers in robotics, machine learning, and cognitive science, and a poster session highlighting the contributed papers will be held throughout the day. We believe the topic of this workshop is timely and necessary for the IEEE-RAS community.

References

[1] Nakamura T, Nagai T, Taniguchi T. Serket: An architecture for connecting stochastic models to realize a large-scale cognitive model. Frontiers in Neurorobotics. 2018;12.
[2] Kingma DP, Welling M. Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114. 2013.
[3] Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative Adversarial Nets. In: Advances in Neural Information Processing Systems. 2014. pp. 2672-2680.
[4] Taniguchi T, Ugur E, Hoffmann M, Jamone L, Nagai T, Rosman B, Matsuka T, Iwahashi N, Oztop E, Piater J, Wörgötter F. Symbol Emergence in Cognitive Developmental Systems: A Survey. IEEE Transactions on Cognitive and Developmental Systems. 2018.