Finance Economics and Econometrics Lab

Seminar Series

2023 - 2024



Speaker: Caner Canyakmaz (TBS).


Title: “Beyond the Black Box: Unraveling the Role of Explainability in Human-AI Collaboration”.

Joint work with Tamer Boyaci and Francis de Véricourt (all ESMT Berlin).

Date: Thursday, March 21st at 12h30 (Paris Time).

Abstract: While AI-based decision tools are increasingly employed for their ability to enhance collaborative decision-making processes, challenges such as overreliance or underreliance on AI outputs pose risks to their effectiveness in achieving complementary team performance. To address these concerns, explainable AI models have been increasingly studied. Despite the promise of bringing transparency and an enhanced understanding of algorithmic decision-making processes, evidence from recent empirical studies has been quite mixed. In this paper, we bring a theoretical perspective to the pivotal role of AI explainability in mitigating these challenges. We develop an analytical model that incorporates the defining features of human and machine intelligence, capturing the limited but flexible nature of human cognition together with imperfect machine recommendations and explanations that reflect the quality of these predictions. We then systematically investigate the multifaceted impact of explainability on decision accuracy, underreliance, overreliance, as well as users’ cognitive loads. Our results indicate that while low explainability levels have no impact on decision accuracy and reliance levels, they lessen the cognitive burden of the decision-maker. On the other hand, providing higher explainability levels enhances accuracy by mitigating overreliance, but at the expense of higher underreliance. Furthermore, the incremental impact of explainability (compared to a black-box system) is higher when the decision-maker is more cognitively constrained or when the stakes are lower. Surprisingly, we find that higher explainability levels can escalate the overall cognitive burden, especially when the decision-maker is particularly pressed for time and initially doubts the machine’s quality, scenarios where explanations are expected to be most needed. By elucidating the comprehensive effects of explainability on decision outcomes and cognitive efforts, our study contributes to our understanding of how to design effective human-AI systems in diverse decision-making environments.

Here is a link to the FEELab website:

https://sites.google.com/view/feelabtbs/

You are cordially invited to participate in the seminar, which will take place in Room 321, Lascrosses building.