Explainability for Human-Robot Collaboration

ACM/IEEE International Conference on Human-Robot Interaction, 2024


 

In human-robot collaboration, explainability bridges the communication gap between complex machine functionality and human users. Understanding and generating explanations that enhance collaboration and mutual understanding between humans and machines is an active area of investigation in robotics and AI.

A key to achieving such seamless collaboration is understanding end-users, whether naive or expert, and tailoring explanation features that are intuitive, user-centred, and contextually relevant.

Advancing this topic requires not only modelling humans' expectations when generating explanations, but also developing metrics to evaluate the generated explanations and to assess how effectively autonomous systems communicate their intentions, actions, and decision-making rationale.

This workshop is designed to tackle the nuanced role of explainability in enhancing efficiency, safety, and trust in human-robot collaboration.

It aims to initiate discussion on the importance of generating and evaluating explainability features in autonomous agents, while also addressing challenges such as bias in explainability, the potential downsides of explanations, and deception in human-robot interaction.