As AI systems increasingly support decision-making in high-stakes domains, it is crucial to move beyond traditional accuracy metrics and examine their broader implications. This tutorial introduces participants to a methodological framework that critically assesses AI’s influence on human reliance, trust calibration, and decision autonomy. Drawing on empirical findings and case studies, we will explore how cognitive and socio-psychological factors shape human-AI interaction.
The session is tailored for researchers, practitioners, and policymakers who seek to understand the complexities of AI-supported decision-making. Attendees will gain hands-on experience with tools and metrics for evaluating AI's effectiveness and will learn to identify the risks and benefits of relying on AI. Building on insights from recent conferences (CHI'23, HHAI'23, HCII'24), the tutorial synthesizes ongoing research in AI ethics and human factors, emphasizing that AI systems should be designed and assessed not by accuracy metrics alone but by their broader socio-technical implications. In doing so, it contributes to discussions on trust calibration, cognitive biases, and the co-evolution of human decision-making processes alongside AI systems.