Talk: Friends or Foes? The Influence of Explainable Social Robots in Human-Robot Collaboration
As social robots become increasingly integrated into human environments, the need for transparency and trust in human-robot interaction (HRI) grows accordingly. This presentation explores the role of explainable artificial intelligence (XAI) in enhancing collaboration and decision-making within HRI contexts. Through a series of experimental studies, we examine how different types of robot explanations influence users’ understanding, trust, and performance. The tasks include collaborative games and decision-making simulations, highlighting that following artificial agents’ recommendations is not always beneficial. Additionally, we address the social dimension of robot tutoring and its impact on user behavior, particularly in scenarios involving incorrect guidance. The discussion considers both the potential and risks of socially intelligent, explainable robots, emphasizing the need for careful design to maximize their benefits while minimizing unintended consequences.
"Passionate about explainable artificial intelligence (XAI), human-robot interaction (HRI), and human cognition and biases. I am the only computer scientist who dreams of becoming a cognitive scientist."
Dr. Marco Matarese (he/him) is a post-doctoral researcher at the Italian Institute of Technology. His main research interests range from robots' influence on humans to explainability in collaborative human-robot interaction. He obtained a bachelor's degree and an M.Sc. in computer science at the University of Naples Federico II, and subsequently completed a PhD in bioengineering and robotics at the University of Genoa and the Italian Institute of Technology. He collaborates with several institutions, including Trinity College Dublin and the Universities of Paderborn, Bielefeld, Bratislava, and Naples "Parthenope".
Talk: Joint Action as a Cognitive Model for Human-Robot Collaboration
Collaborating with robots is harder than collaborating with humans on sufficiently complex tasks. The many underlying causes can roughly be divided into low-level coordination of forces using haptic and proprioceptive feedback, and higher-level coordination of actions, action goals, and strategic decisions. It is the latter that is referred to as joint action. Studies of human-human interaction show that humans co-represent one another's tasks, which allows them to anticipate each other's actions and to incorporate the other's actions into their own action plans when collaborating. Joint action with robots can be notoriously hard because humans have difficulty understanding the "intentions" of the robot. In this talk I discuss how robots can be designed to express non-verbal cues in order to communicate their intentions. The idea is that, with appropriate non-verbal cues, people form more accurate mental models of the robot, making it easier to anticipate the robot's actions. I will present lab experiments in which we test whether a cognitive architecture that facilitates joint action indeed helps people anticipate the robot's actions.
"For robots to understand people, people must first understand robots."
Prof. Raymond Cuijpers is Associate Professor of Cognitive Robotics and Human-Robot Interaction at Eindhoven University of Technology (TU/e). His research focuses on socially intelligent robots, AI for cognitive agents, and how humans perceive and interact with robots through vision, touch, and movement.
With a background in Applied Physics (TU/e) and a PhD in Physics of Man (Utrecht University), Raymond has held research positions across leading Dutch institutions. He has contributed to major European projects, including coordinating the FP7 KSERA project, which developed socially assistive robots for elderly care. His work blends insights from AI, robotics, and cognitive science to make robots more intuitive and socially aware in human environments.
Talk: Empowering Users through Human-AI Shared Regulation: A Psychological Approach to Assistive Robotics
Self-regulated learning (SRL) is a conceptual framework used to understand the cognitive, motivational, and emotional dimensions of individual learning in educational contexts. Recent technological developments have raised the question of whether SRL can be enhanced through artificial intelligence (AI). Järvelä and colleagues (2023) developed a model termed Human-AI Shared Regulation in Learning (HASRL), in which AI supports SRL through four stages: detect, diagnose, act, and learn. We propose applying this model to assistive robotics to better support the unique needs of each individual. By leveraging HASRL in assistive robotics, we aim to enhance users' self-regulated learning processes, fostering greater independence and engagement. This approach not only tailors support to individual needs but also empowers users to develop self-efficacy and adaptive coping strategies.
Talk: From Voice to Touch: The Next Frontier in Human-Robot Interaction
Recent advancements in end-to-end generative AI have brought research focused primarily on voice-based, face-to-face communication close to maturity. At the same time, humanoid robots have made considerable progress in full-body movement and object manipulation. These developments suggest that future research will leverage end-to-end AI to its fullest potential, shifting towards human-robot interaction in real-world physical environments. This includes resolving the ambiguities that naturally arise during physical activities, as well as managing interactions involving physical contact between humans and robots, such as in caregiving contexts. In this talk, I will examine these two critical challenges, sharing research examples from the Cabinet Office Moonshot Program that aim to address them, and demonstrate how ongoing advancements are paving the way for more effective human-robot interaction in real-world settings.