=== Introduction to symbiotic human-robot interaction ========================
Opening and introduction to studies on symbiotic human-robot interaction
The goal of studies on symbiotic human-robot interaction is to develop autonomous social robots that can communicate with multiple humans through the same variety of communicative means that humans use with one another. Achieving this goal requires developing several devices and technologies: (a) surface skin materials and internal structures for safe physical interaction with humans, (b) robust and flexible speech recognition, (c) autonomous, context- and task-sensitive communication based on a hierarchical model of desire, intention, and behavior, including speech acts, and (d) the use of multiple communicative means to communicate with multiple people in social contexts. In this workshop, we will discuss these issues.
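As a rough illustration of item (c) above, the following is a minimal, hypothetical sketch of how a hierarchical desire-intention-behavior model might be organized; the class names, the relevance rule, and the realization rule are assumptions made for illustration and do not describe any particular system presented at this workshop.

# Minimal, hypothetical sketch of a desire -> intention -> behavior hierarchy.
# All names and rules here are illustrative assumptions, not an actual system.
from dataclasses import dataclass

@dataclass
class Desire:          # long-term goal, e.g. "keep the user engaged"
    name: str
    priority: float

@dataclass
class Intention:       # concrete commitment derived from a desire
    desire: Desire
    target: str        # e.g. which person to address

@dataclass
class Behavior:        # observable act, including speech acts
    kind: str          # "speech", "gaze", "gesture", ...
    content: str

def select_intention(desires, context):
    """Pick the highest-priority desire that is relevant in the current context."""
    relevant = [d for d in desires if d.name in context["relevant_desires"]]
    best = max(relevant or desires, key=lambda d: d.priority)
    return Intention(desire=best, target=context["addressee"])

def realize(intention):
    """Map an intention to a speech act (placeholder realization rule)."""
    return Behavior(kind="speech",
                    content=f"(utterance serving '{intention.desire.name}' "
                            f"addressed to {intention.target})")

if __name__ == "__main__":
    desires = [Desire("greet_newcomer", 0.9), Desire("keep_engagement", 0.5)]
    context = {"relevant_desires": ["greet_newcomer"], "addressee": "person_A"}
    print(realize(select_intention(desires, context)))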
[Keynote speech]: Spoken Dialogue Design for a Human-like Conversational Robot
This talk reviews issues in spoken dialogue system design for our conversational android ERICA. After introducing several social interaction tasks in which a human-like presence and face-to-face communication are important, the design principle of the spoken dialogue system, based on a hybrid of dialogue modules, is presented. Then, an attentive listening system that allows for flexible and robust interaction is explained, followed by a demonstration.
[1] T. Kawahara. Spoken Dialogue System for a Human-like Conversational Robot ERICA. In Proc. IWSDS (keynote), 2018.
[2] D. Lala, P. Milhorat, K. Inoue, M. Ishida, K. Takanashi, and T. Kawahara. Attentive listening system with backchanneling, response generation and flexible turn-taking. In Proc. SIGdial, pp. 127-136, 2017.
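As a deliberately simplified illustration of the attentive listening behaviour mentioned in the keynote abstract, the sketch below shows one possible backchannel-timing rule driven by pause length and a falling-pitch cue; the thresholds, feature names, and backchannel phrases are assumptions for illustration and do not reproduce the system of [2].

# Simplified, hypothetical backchannel-timing rule for an attentive listener.
# Thresholds, feature names, and phrases are illustrative assumptions only.
import random
from typing import Optional

BACKCHANNELS = ["uh-huh", "I see", "right", "hmm"]

def should_backchannel(pause_ms: float, pitch_fall: bool) -> bool:
    """Backchannel after a short pause that follows a falling pitch contour."""
    return pitch_fall and 200.0 <= pause_ms <= 800.0

def listen_step(pause_ms: float, pitch_fall: bool) -> Optional[str]:
    if should_backchannel(pause_ms, pitch_fall):
        return random.choice(BACKCHANNELS)
    return None  # keep listening; separate turn-taking logic decides when to speak

# Example: a 400 ms pause after a falling pitch contour triggers a backchannel.
print(listen_step(pause_ms=400, pitch_fall=True))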
=== Research trends in human-robot interaction =============================
[Invited talk 1]: Emergent Implicit Behaviours in Human Robot Interaction
Synchronisation with others is a fundamental skill for collaboration in many typical human activities. Robots interacting in similar contexts might leverage similar skills to support these human activities. Interaction between humans and robots that aims at the immediate bootstrapping of effective collaboration often relies on pre-existing models of human-human interaction. Within these models, the concept of synchronisation builds on two elements: the correct reading of others' actions and the meaningful communication of robot intentions. In this presentation, we will give an overview of our recent studies on emergent behaviours and synchronisation in human-robot interaction, in particular indicating which cues in non-explicit communication promote natural interaction with a humanoid robot [1][2].
[1] N. Noceti, F. Rea, A. Sciutti, F. Odone, and G. Sandini. View-invariant robot adaptation to human action timing. In Proc. IEEE Technically Sponsored Intelligent Systems Conference.
[2] F. Vannucci, A. Sciutti, M. Jacono, G. Sandini, and F. Rea. Adaptation to a humanoid robot in a collaborative joint task. In Proc. RO-MAN 2017 (26th IEEE International Symposium on Robot and Human Interactive Communication), 2017.
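As one concrete, hypothetical reading of the action-timing adaptation studied in [1] and [2], the toy sketch below lets a robot nudge the duration of its own action toward the duration it observes in its human partner; the adaptation rate and the duration limits are assumptions for illustration, not parameters from those studies.

# Hypothetical sketch of timing adaptation: the robot nudges the duration of its
# own action toward the duration it observes in the human partner's actions.
# The adaptation rate and the clipping range are illustrative assumptions.
def adapt_duration(robot_duration: float, observed_human_duration: float,
                   rate: float = 0.3, min_d: float = 0.5, max_d: float = 4.0) -> float:
    """Move the robot's action duration (seconds) part-way toward the human's."""
    new_d = robot_duration + rate * (observed_human_duration - robot_duration)
    return max(min_d, min(max_d, new_d))

# Example: over repeated trials the robot converges toward the human's pace.
d = 3.0
for human_d in [1.2, 1.1, 1.3, 1.2]:
    d = adapt_duration(d, human_d)
    print(round(d, 2))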
[Invited talk 2]: The relationships between dialog and conversation
Historically, spoken language interaction between humans and computers evolved from simple command and control, which requires specific language, to goal-directed dialog that supports more variable input, though constrained by a specified domain. More recently, with the availability of high-quality speech recognition, the field has begun to investigate so-called socialbots that are expected to talk coherently with humans on almost any subject. These systems, while different, share important characteristics. This talk will examine these commonalities.
===
[Talk 1]: Human robot interaction in a daily environment
Conversational robots are expected to keep a user engaged in conversation. However, these robots sometimes utter comments whose topics are irrelevant to the current context, owing to failures in recognizing the human user's speech or intention. Such a sudden topic shift is considered to interfere with what we call the sense of conversation, that is, the feeling that one is actually participating in a conversation. In this talk, we discuss the potential merits of using a group of multiple robots to provide users with a stronger sense of conversation in a daily environment.
[1] T. Arimoto, Y. Yoshikawa, and H. Ishiguro. Multiple-Robot Conversational Patterns for Concealing Incoherent Responses. International Journal of Social Robotics, 2018. https://doi.org/10.1007/s12369-018-0468-5
[2] Y. Yoshikawa, T. Iio, T. Arimoto, H. Sugiyama, and H. Ishiguro. Proactive Conversation between Multiple Robots to Improve the Sense of Human-Robot Conversation. In Proc. AAAI 2017 Fall Symposium on Human-Agent Groups: Studies, Algorithms and Challenges, Arlington, Virginia, November 9-11, 2017, pp. 288-294.
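The following toy sketch illustrates, under strong simplifying assumptions, one multiple-robot pattern in the spirit of [1]: when speech recognition confidence is low, a second robot addresses the first robot instead of the user, so the exchange remains coherent. The confidence threshold, robot names, and utterances are all hypothetical.

# Toy sketch of a two-robot fallback pattern: when recognition confidence is low,
# robot B addresses robot A instead of the user, so the exchange stays coherent.
# Confidence threshold, robot names, and utterances are illustrative assumptions.
from typing import Tuple

CONF_THRESHOLD = 0.6  # assumed ASR confidence threshold

def respond(asr_text: str, asr_confidence: float) -> Tuple[str, str]:
    """Return (speaker, utterance) for the next robot turn."""
    if asr_confidence >= CONF_THRESHOLD:
        # Robot A answers the user directly.
        return "robot_A", f"(reply to the user about: {asr_text})"
    # Robot B redirects the turn to robot A, concealing the recognition failure.
    return "robot_B", "By the way, robot A, what do you think about that?"

print(respond("I went hiking last weekend", 0.9))
print(respond("<unintelligible>", 0.3))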
[Talk 2]: Comparing Remote Learning Technologies
Telepresence robots allow people to communicate with and navigate through distant environments via a symbiotic relationship with human users. We are interested in using telepresence robots to augment the abilities of students who need to miss extended amounts of school. In an initial study, we worked with two courses at the University of Southern California (USC) to compare student experiences attending class in three ways: in person, via USC's DEN@Viterbi distance learning tools, and via a telepresence robot. The results revealed that most students preferred the DEN@Viterbi tools, although this attendance method was less immersive and less expressive of the human user. Participants were generally interested in attending class through a telepresence robot, especially if key technical challenges were solved. Future work will address these difficulties and answer additional questions about how telepresence robots can empower students.
[Talk 3]: Learning Motion Primitives and Task Plan in Teleoperated Robot Motion through Multi-modal Interface
Performing complex and dexterous motion coordination is difficult in teleoperation. We propose to use modeling methods to learn motion primitives and task plans through a multi-modal teleoperation system. Through this study, we will learn and compare motion coordination strategies, human performance, and the usability of multiple teleoperation user interfaces. The results of this study are important for understanding how human motion strategies adapt to the capabilities of remote robots. They also contribute to the design of a more intuitive and assistive semi-autonomous teleoperation interface.
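To make the idea of extracting motion primitives from teleoperation data more concrete, the sketch below segments a recorded end-effector trajectory at low-speed points (pauses); this is only an assumed, illustrative segmentation rule, and the speed threshold and minimum segment length are not taken from the talk.

# Hypothetical sketch: segmenting a teleoperated end-effector trajectory into
# candidate motion primitives at low-speed points (pauses). The speed threshold
# and minimum segment length are illustrative assumptions, not the talk's method.
import numpy as np

def segment_primitives(positions: np.ndarray, dt: float,
                       speed_thresh: float = 0.02, min_len: int = 10):
    """positions: (T, 3) array of end-effector positions sampled every dt seconds."""
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    moving = speeds > speed_thresh
    segments, start = [], None
    for t, m in enumerate(moving):
        if m and start is None:
            start = t
        elif not m and start is not None:
            if t - start >= min_len:
                segments.append((start, t))
            start = None
    if start is not None and len(moving) - start >= min_len:
        segments.append((start, len(moving)))
    return segments  # list of (start_index, end_index) candidate primitives

# Example with a synthetic move-pause-move trajectory.
traj = np.concatenate([np.linspace(0, 1, 50)[:, None] * [1, 0, 0],
                       np.ones((30, 3)) * [1, 0, 0],
                       [1, 0, 0] + np.linspace(0, 1, 50)[:, None] * [0, 1, 0]])
print(segment_primitives(traj, dt=0.01))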
[Talk 4]: Theoretical aspects for a developmental approach in child-robot interaction
One of the characteristics of symbiotic human-robot interaction is its developmental nature, which unfolds through the co-existence of and exchanges between humans and robots over long periods of time. However, until recently the vast majority of research in the field of human-robot interaction has focused on short-term, well-controlled experimental designs, which do not allow the investigation of emerging developmental trajectories as they occur in naturalistic situations. This paper suggests the investigation of human-robot interaction from a systemic developmental perspective. A developmental approach to long-term symbiotic human-robot interaction can potentially support the design, implementation, and evaluation of autonomous robotic systems and, consequently, effective human-robot co-development. We choose to focus on the special target group of children, taking advantage of the rapid development that occurs during childhood. First, we outline our motivation; we briefly introduce theoretical approaches from the field of developmental psychology that can potentially support the proposed approach; we then discuss their association with existing work, with a focus on the case of child-robot interaction. Lastly, we propose future directions for investigation in the emerging field of long-term developmental human-robot interaction.
[Talk 5]: Reinforcement learning for human-robot interaction in a real environment
For a robot interacting with humans in a real environment, the ability to learn human-like social skills is crucial, since it is not feasible to preprogram all the behaviors suited to diverse situations. Reinforcement learning (RL) is a method by which an autonomous agent learns its own control rule (policy) through trial and error, and RL in real environments has become realistic thanks to recent advances in RL methods that use deep neural networks. However, RL for human-robot interaction in a real environment is still challenging, since the situations the robot faces are diverse and the policy may also depend on the intentions of others. In this presentation, we will introduce our current efforts on human-robot interaction, especially on the learning of social skills by a humanoid robot [1], [2].
[1] A. H. Qureshi, Y. Nakamura, Y. Yoshikawa, and H. Ishiguro. Intrinsically motivated reinforcement learning for human-robot interaction in the real-world. Neural Networks, 2018.
[2] A. H. Qureshi, Y. Nakamura, Y. Yoshikawa, and H. Ishiguro. Show, attend and interact: Perceivable human-robot social interaction through neural attention Q-network. In Proc. IEEE International Conference on Robotics and Automation (ICRA), 2017.
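As a minimal, self-contained illustration of reinforcement learning for selecting social actions, the sketch below uses tabular Q-learning over a handful of discrete states and actions; it is not the neural attention Q-network of [2], and the states, actions, and reward signal (e.g. a detected smile) are assumptions made for illustration.

# Toy tabular Q-learning sketch for choosing social actions; this is NOT the
# neural attention Q-network of [2]. States, actions, and the reward signal
# (e.g. a detected smile or a returned handshake) are illustrative assumptions.
import random
from collections import defaultdict

ACTIONS = ["wait", "look_at_person", "wave", "offer_handshake"]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state: str) -> str:
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def update(state: str, action: str, reward: float, next_state: str) -> None:
    """Standard one-step Q-learning update."""
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

# One illustrative interaction step: a person approaches, the robot acts, and a
# hypothetical perception module reports whether the reaction was positive.
s = "person_approaching"
a = choose_action(s)
update(s, a, reward=1.0 if a == "wave" else 0.0, next_state="person_engaged")
print(Q[s])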