Human-Robot Interaction for Explainability in Robotics
IEEE International Conference on Robot and Human Interactive Communication, IEEE RO-MAN 2023
Paradise Hotel in Busan, South Korea
28 August
Senior Lecturer in the Faculty of Engineering and IT at the University of Melbourne. Her research aims at creating acceptable and useful assistive robot interactions using social signal sensing, affective and cognitive reasoning, and natural expressivity.
Assistant Professor of Information and Computing Sciences at Utrecht University. Her research focuses on people's affective, behavioral, and cognitive responses to robots, aiming at the development of socially acceptable robots.
PhD candidate in the School of Computer Science, Faculty of Engineering at the University of New South Wales. Her research focuses on human-in-the-loop robot learning and explainable AI & robotics.
Postdoctoral researcher in the Robotics, Perception, and Learning (RPL) division at KTH. Her research focuses on semantic scene understanding that enables robots to operate effectively in human environments.
PhD student in the Collaborative AI and Robotics Lab at CU. His research focuses on developing and operationalizing novel explainable AI techniques within human-robot teaming scenarios.
We were excited to reactivate the discussions of explainability in an HRI context with ~30 participants in person and an additional ~10 attendees online. After Wafa introduced our motivation for the workshop, we continued with the invited talks. During that session’s Q&A, we discussed the explanation selection problem and whether intelligible verbal explanations may deceive users about the robot’s mental model of the world. We also discussed how to deal with potential cognitive overload, given that interactions with robots are multimodal and people have not (yet) learned which modality to focus on.
Tim Miller presented three main take-aways from the social sciences for explainable robotics, namely that explanations are contrastive, interactive, and selective. During the Q&A, he shared his insights on the impact of LLMs on explainability and on the role of embodiment. The modality of an explanation depends on its content and the application domain: for some types of explanations, visualizations may work better than verbal explanations. And because robots have physical bodies, users tend to anthropomorphize them and hold higher expectations for the explanations provided (compared to AI systems on screens).
The morning ended with an on-the-spot poster session in which attendees created mini posters in small groups. The outcome was a set of insights on the differences between explainable AI and explainable robotics, first-person vs. third-person explanations, multimodality in delivering explanations, transferability of study results, the influence of relationship history and contextual factors on explanations, the role of embodiment in expectations of explanations, expectation management and sense-making, and what we may learn from research on animal-human interaction for designing explanations in an HRI context.
During the workshop, participants spent 35 minutes collaboratively crafting posters, each addressing one or more key questions tied to the central theme of the event: defining and designing explainability in Human-Robot Interaction (HRI). This creative session was followed by a poster parade, in which each participant had a brief 1-2 minute window to present their poster to the entire workshop audience.