=====================================================================================
Website: https://sites.google.com/view/x-hri
Date: March 3, 2025
Location: Hybrid (Melbourne and online), as part of the 2025 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2025)
Manuscript submission site: https://easychair.org/conferences?conf=xhri2025
Submission deadline: February 12, 2025
Notification of acceptance: February 19, 2025
Camera-ready deadline: February 26, 2025
Workshop: March 3, 2025
All deadlines are at 23:59 Anywhere on Earth time.
The goal of this workshop is to deepen the conversation on explainability in Human-Robot Interaction (HRI) by bringing together insights from both humanistic/social and technical perspectives.
We aim to challenge the assumptions surrounding AI transparency in HRI, focusing in particular on the concept of explainability.
We will explore these questions through a comprehensive panel discussion with scholars from different areas of expertise and a series of lightning talks.
Our panel, titled "The Illusion of Explainability—Are We Misleading Ourselves About AI Transparency in Real-World HRI?", brings together four distinguished experts in the field, each delivering a 10-minute talk on key issues before the floor opens for an in-depth panel discussion.
The objective of the workshop is to systematically explore the following topics:
Whether current approaches to AI transparency and explainability in HRI genuinely promote understanding of robot behavior or merely create an illusion of explainability.
The gap between theoretical explainability frameworks and their application in real-world HRI scenarios, where the complexity of interactions often overshadows clear communication.
The role of embodiment and context in shaping what is perceived as explainable behavior in robots, and how this perception influences trust and collaboration.
What are our responsibilities as researchers and developers when building explainable and transparent AI systems?
Who is to blame for unintended side effects: the robot, the human, the lack of explanations, or the illusion of explanations?
Through the diverse perspectives of our panelists and the subsequent discussion, we aim to critically assess the state of explainability in HRI and identify future directions for creating genuinely transparent and trustworthy AI systems in practice.
Topics of interest include, but are not limited to:
Using participatory design to achieve explainability
The downsides of explainability
The connection between explainability and trust
What makes an interaction explainable
Metrics to evaluate explainability
Deception in Human-Robot Interaction
Unintended biases in explainability and how to alleviate them
Transparency in trustworthy autonomous systems
Explanation generation as a model reconciliation process
Adapting explanation through forming a mental model
Explanation generation
AI Ethics
Bias and misinformation in LLM-based explanations
Few-shot learning and adaptation for LLMs in explainable HRI
We invite scientific papers of 3 to 4 pages, including references and appendices. Submissions may cover various types of work, including ongoing projects with preliminary findings, technical reports, case studies, opinion pieces, surveys, and cutting-edge research on explainability in robotics and AI. All submitted papers will undergo a thorough review process assessing their relevance, originality, and scientific and technical soundness. Authors are asked to adhere to the submission guidelines outlined by HRI 2025.
Submissions do not need to be anonymized for review. All manuscripts must be written in English and submitted electronically in PDF format via EasyChair: https://easychair.org/conferences?conf=xhri2025
Accepted papers will be published on the workshop website as well as on arXiv. Authors of accepted papers will present their work as lightning talks or posters during the workshop.
Authors should use the ACM SIG proceedings template files (US letter), with "sigconf" as the document class option instead of "manuscript,screen,review": https://www.acm.org/publications/proceedings-template
Overleaf template (again, use "sigconf" instead of "manuscript,screen,review"): https://www.overleaf.com/latex/templates/acm-conference-proceedings-primary-article-template/wbvnghjbzwpc
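For instance, the preamble of a submission would typically begin as follows (a minimal sketch, assuming the standard acmart class shipped with the ACM template):

\documentclass[sigconf]{acmart}  % instead of \documentclass[manuscript,screen,review]{acmart}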