01.Towards Science of Interaction Design: Cognitive Developmental Robotics Revisited.pdf

Authors: Minoru Asada

Towards Science of Interaction Design: Cognitive Developmental Robotics Revisited

Abstract: This paper examines the contemporary significance of the fundamental concepts of cognitive developmental robotics (CDR): embodiment and social interaction. We delve into the various forms of interaction that blur the boundary between embodiment and social interaction. Furthermore, we explore the impact of the latest AI technology and its implications for interaction. Finally, we provide a summary of human-robot interactions and discuss future issues.

02.Adaptive Task Sharing in Collaborative Robotics.pdf

Authors: Thomas Faure, Hugo Decroux, Jean-Marc Salotti

Adaptive Task Sharing in Collaborative Robotics

Abstract: In a flexible collaborative task-sharing framework, situation awareness issues must be taken into account. A robot must be able to “understand” the context of the collaboration, especially the difficulty of the task, the human's willingness to collaborate, and their availability. We propose to implement a robotic decision process based on the situation awareness model defined by Endsley. An experiment has been carried out to illustrate an adaptive collaboration in which the robot takes the initiative and fulfills the task that was assigned to the human.
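Endsley's model distinguishes three levels of situation awareness: perceiving the relevant cues, comprehending what they mean, and projecting their near-future consequences. As a rough illustration of how such a layered decision process could be wired into the robot, consider the Python sketch below; the cues, thresholds, and labels are our own assumptions, not the authors' implementation.

    # Minimal sketch of a decision step structured around Endsley's three
    # levels of situation awareness. All cues and thresholds are assumed
    # for illustration; they are not taken from the paper.
    from dataclasses import dataclass

    @dataclass
    class Cues:                     # Level 1: perception of the elements
        task_difficulty: float      # estimated difficulty in [0, 1]
        human_available: bool       # e.g., not occupied by another task
        human_willing: bool         # e.g., inferred from engagement cues

    def comprehend(c: Cues) -> str:
        """Level 2: interpret the raw cues into a situation label."""
        if not (c.human_available and c.human_willing):
            return "human_unavailable"
        return "hard_task" if c.task_difficulty > 0.7 else "nominal"

    def decide(situation: str) -> str:
        """Level 3: project the outcome and allocate the task."""
        if situation in ("human_unavailable", "hard_task"):
            return "robot_takes_initiative"   # robot fulfills the human's task
        return "human_continues"

    print(decide(comprehend(Cues(0.9, True, True))))  # -> robot_takes_initiative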

03.Modelling interacting human drivers to enhance vehicle development.pdf

Authors: Olger Siebinga, Arkady Zgonnikov, and David Abbink

Modelling Interacting Human Drivers to Enhance Autonomous Vehicle Development

Abstract: A major open problem in the development of autonomous driving is how to handle interactions between traffic participants. Understanding human behaviour in these interactions could help solve this problem. The common approach for such interaction models is to use game theory. However, game theory makes three strong assumptions: firstly, that humans are rational utility optimizers; secondly, that there is no communication between drivers; and thirdly (in most models), that only high-level choices (e.g., to merge or yield) need to be described. To address these issues, we have introduced the Communication-Enabled Interaction (CEI) model. In four simulated scenarios, our model shows plausible interactive behaviours. We envision multiple use cases for this model to improve the interactive driving behaviour of autonomous vehicles, both online (in the vehicle) and offline (during development). We hope our model can thereby help enable safe and acceptable interactions with autonomous vehicles.

04.Real-time Gesture Recognition in Industry.pdf

Authors: Afra María Pertusa Llopis, Adriana Costa Lopez, Jawad Masood

Real-time Gesture Recognition in Industry

Abstract: Connecting computer vision-based gesture recognition with cognitive modeling can play a key role in developing efficient algorithms and in understanding the neural mechanisms behind human communication. It makes it possible to detect, track, and interpret human body movements in intuitive ways so that we can control and interact with robots safely. In this paper, we present our experience of deploying real-time gesture recognition on a multi-camera industrial system. A new strategy based on YOLO and OpenPose reaches 13 fps while robustly detecting, tracking, and decomposing the movements of multiple humans wearing different clothing. YOLO was used to detect and track each person, and the background was eliminated to focus only on the image regions that contain humans. OpenPose was used to extract 15 key points from each human bounding box. This key-point information was then used to construct a 2D human skeleton for gesture recognition. The approach yields robust and stable results, with limitations when humans cross paths and when clothing color matches the background.
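The two-stage pipeline described above (YOLO for detection and tracking, OpenPose for 15-keypoint pose estimation inside each bounding box) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: run_openpose is a placeholder for an OpenPose binding, and the ultralytics YOLO weights named here stand in for whatever detector the deployment actually used.

    # Illustrative reconstruction of the detect-then-pose pipeline.
    # Not the authors' code: run_openpose is a placeholder, and the
    # YOLO weights below stand in for the deployed detector.
    import numpy as np
    from ultralytics import YOLO

    detector = YOLO("yolov8n.pt")  # any person-capable YOLO weights

    def run_openpose(person_crop: np.ndarray) -> np.ndarray:
        """Placeholder: return a (15, 2) array of 2D keypoints for one person."""
        raise NotImplementedError("bind to OpenPose or another pose estimator")

    def process_frame(frame: np.ndarray) -> list[np.ndarray]:
        skeletons = []
        for box in detector(frame)[0].boxes:
            if int(box.cls) != 0:            # COCO class 0 is "person"
                continue
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            crop = frame[y1:y2, x1:x2]       # drop background outside the box
            keypoints = run_openpose(crop)   # 15 key points per person
            keypoints += np.array([x1, y1])  # back to full-frame coordinates
            skeletons.append(keypoints)      # 2D skeleton for gesture recognition
        return skeletons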

05.Language Reasoning with Visual Cues for Robotics.pdf

Authors: Alexander Radchenko, Kasim Terzic, Alice Toniolo and Juan Ye

Language Reasoning with Visual Cues for Robotics: An Explorative Experiment

Abstract: Large language models (LLMs) are making significant progress in a wide range of applications and have recently been applied to robotic planning. LLMs have the benefit of allowing users to instruct the robot using natural language, and the ability to explain their plan to the user, both of which can potentially improve interaction with humans and allow for easier adaptation. But are plans generated this way robust enough for real-life use? We present an early experiment designed to test the limits of such an approach and outline the challenges ahead.

06.Cognitively inspired vision architecture for adaptive learning of compositional language.pdf

Authors: Daniel Koudouna, Kasim Terzic

Cognitively Inspired Vision Architecture for Adaptive Learning of Compositional Language

Abstract: To facilitate a common representation for human-robot communication, and to allow robots to interact with changing environments, we propose a cognitive architecture, inspired by the theory of conceptual spaces, for answering language queries. We focus on the gap between humans, who learn new visual concepts from few examples and adapt to new environments, and current state-of-the-art models, which require millions of annotated training examples and often lack the ability to generalize to unseen environments. We introduce a dataset for learning individual language constructs without bounding boxes. Our approach compares favourably with state-of-the-art approaches designed for similar tasks, with advantages that make it suitable for a video dataset.

07.Bayesian Inverse Motion Planning for Online Goal Inference in Continuous Domains.pdf

Authors: Tan Zhi-Xuan, Jovana Kondic, Stewart Slocum, Joshua B. Tenenbaum, Vikash K. Mansinghka, Dylan Hadfield-Menell

Bayesian Inverse Motion Planning for Online Goal Inference in Continuous Domains

Abstract: Humans and other agents navigate their environments by acting efficiently to achieve their goals. In order to infer agents’ goals from their actions, it is thus necessary to model how agents achieve their goals efficiently. Here, we show how online goal inference and trajectory prediction in continuous domains can be performed via Bayesian inverse motion planning: By modeling an agent as an approximately Boltzmann-rational motion planner that produces low-cost trajectories while avoiding obstacles, and placing a prior over goals, we can infer the agent’s goal and future trajectory from partial trajectory observations. We compute these inferences online using a sequential Monte Carlo algorithm, which accounts for the multimodal distribution of trajectories due to obstacles, and exhibits better calibration at early timesteps than a Laplace approximation and a greedy baseline.
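In our own notation (beta for the rationality parameter and C_g for the goal-conditioned trajectory cost; these labels are not taken from the paper), the generative model behind this inference can be stated compactly in LaTeX:

    % Boltzmann-rational trajectory likelihood for a goal g
    P(\tau \mid g) \;\propto\; \exp\!\bigl(-\beta\, C_g(\tau)\bigr)

    % Posterior over goals after observing a partial trajectory \tau_{1:t}
    P(g \mid \tau_{1:t}) \;\propto\; P(g)\, P(\tau_{1:t} \mid g)

The sequential Monte Carlo algorithm then approximates this posterior with a set of weighted particles, which is what lets it represent the multimodal trajectory distributions that obstacles induce.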

08.Design of Human-Aware Robotic Decision Support Systems.pdf

Authors: Manisha Natarajan, Chunyue Xue, Karen Feigh and Matthew Gombolay

Design of Human-Aware Robotic Decision Support Systems

Abstract: Advances in robotics and artificial intelligence (AI) have enabled the possibility of human-robot teaming. One potential avenue for collaborative robots is to provide decision support for human partners in complex decision-making tasks. However, such agents are imperfect in real-world scenarios and may provide incorrect or suboptimal recommendations. Thus, it is imperative for human collaborators to understand when to trust the robot’s suggestions so as to maximize task performance. Explainable AI (xAI) attempts to improve user understanding by providing explanations or rationales for agent recommendations. However, constantly providing explanations is unnecessary and can induce cognitive overload among users. In this work, we propose a POMDP framework that allows the robot to infer the users’ latent trust and preferences in order to provide appropriate and timely explanations, maximizing human-robot team performance in a sequential decision-making game.
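To make the idea concrete, here is a toy belief-update loop over latent trust, written by us for illustration; the states, probabilities, and decision rule are assumptions, not the paper's POMDP.

    # Toy sketch (our own formulation, not the paper's model): a POMDP-style
    # loop where the hidden state is the user's trust level and the robot
    # chooses whether to attach an explanation to a recommendation.
    STATES  = ["low_trust", "high_trust"]          # latent user trust
    ACTIONS = ["recommend", "recommend_explain"]   # explaining costs attention
    OBS     = ["accept", "reject"]                 # user's response to advice

    # P(obs | state): high-trust users accept advice more often (assumed numbers)
    OBS_MODEL = {
        ("low_trust",  "accept"): 0.3, ("low_trust",  "reject"): 0.7,
        ("high_trust", "accept"): 0.8, ("high_trust", "reject"): 0.2,
    }

    def update_belief(belief: dict, obs: str) -> dict:
        """Bayesian belief update over latent trust after observing the user."""
        post = {s: belief[s] * OBS_MODEL[(s, obs)] for s in STATES}
        z = sum(post.values())
        return {s: p / z for s, p in post.items()}

    def choose_action(belief: dict) -> str:
        """Explain only when trust is likely low, avoiding needless overload."""
        return "recommend_explain" if belief["low_trust"] > 0.5 else "recommend"

    belief = {"low_trust": 0.5, "high_trust": 0.5}
    belief = update_belief(belief, "reject")       # user rejected the last advice
    print(choose_action(belief))                   # -> recommend_explain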

09.Object assembly using inferred causal models from humans.pdf

Authors: Semir Tatlidil, Semanti Basu, Kaishuo Zhang, F. Tao Burga Montoya, Steven Sloman, R. Iris Bahar

Object Assembly Using Inferred Causal Models from Humans

Abstract: People rely heavily on causal information to complete tasks. While useful, learning causal models requires experimental interventions on variables, which can be very costly for complex tasks. To overcome this problem, we propose to infer causal models from people by asking a series of simple questions about the results of imaginary interventions. We run a preliminary online study with naïve participants to infer causal models, and demonstrate how these models can be used to improve a planning algorithm in an object assembly task.
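One way such elicited answers could feed a planner: treat each "if X were changed, would Y be affected?" answer as a candidate causal edge, then order assembly steps consistently with the resulting graph. Below is a minimal sketch with invented parts and hard-coded answers; the actual questions, parts, and planner in the study may differ.

    # Illustrative sketch (assumed details, not the authors' protocol):
    # build a causal graph over assembly parts from answers to imaginary
    # intervention questions, then derive an assembly order from it.
    from graphlib import TopologicalSorter  # Python 3.9+ standard library

    PARTS = ["base", "leg", "seat", "backrest"]

    def would_changing_x_affect_y(x: str, y: str) -> bool:
        """Stand-in for one participant question:
        'If we changed X, would Y be affected?'"""
        answers = {("base", "leg"), ("leg", "seat"), ("seat", "backrest")}
        return (x, y) in answers  # hard-coded here; crowd answers in the study

    # Edge x -> y whenever intervening on x is reported to affect y.
    graph = {y: {x for x in PARTS if would_changing_x_affect_y(x, y)}
             for y in PARTS}

    # A causally consistent assembly order respects the inferred dependencies.
    print(list(TopologicalSorter(graph).static_order()))
    # -> ['base', 'leg', 'seat', 'backrest']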