Introduction (15 min)
Objectives of the tutorial
Overview of the research lab where the framework was developed
Core Concepts and Framework (30 min)
Explainable AI (XAI): Definition and core concepts
Uncertainty in AI decision-making: Modeling and implications
Interaction dynamics between humans and AI systems
AI System Evaluation (30 min)
Calibration of AI systems and its impact on reliability
Robustness as a holistic quality dimension
The limitations of accuracy as an evaluation metric
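The calibration topic in this block is often made concrete via Expected Calibration Error (ECE), which measures the gap between a model's stated confidence and its observed accuracy. A minimal sketch follows; the equal-width binning scheme and the toy data are illustrative assumptions, not material from the tutorial itself.

```python
# Sketch of Expected Calibration Error (ECE): the weighted average gap
# between mean confidence and accuracy, computed over confidence bins.
def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probabilities in (0, 1]; correct: 0/1 outcomes."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Samples whose confidence falls in this bin (left-open, right-closed).
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        # Weight each bin's confidence-accuracy gap by its sample share.
        ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece

# Toy example: three predictions with stated confidences and outcomes.
print(expected_calibration_error([0.9, 0.8, 0.6], [1, 1, 0]))  # ≈ 0.3
```

A low ECE means the system's confidence scores can be taken at face value, which is one reason calibration, rather than raw accuracy alone, matters for reliability in decision support.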
AI’s Role in Decision Support (30 min)
Metrics for assessing AI’s impact on human decisions
Evaluating appropriate reliance on AI
Empirical Impacts (45 min)
Understanding user experience and trust calibration
The "White Box Paradox" and the risk of misleading explanations
Patterns of human reliance on AI (automation bias, conservatism bias, skill erosion)
Challenges in AI Integration (45 min)
The balance between automation bias and conservatism bias
Measuring over-reliance and under-reliance in AI-supported decision-making
Evidence-based design of human-AI interaction protocols
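The reliance measures in this block can be sketched with a common operationalization: over-reliance as the rate of following the AI when it is wrong, and under-reliance as the rate of rejecting it when it is right. The trial representation and the toy data below are illustrative assumptions, not the tutorial's own protocol.

```python
# Sketch of over-/under-reliance rates from human-AI decision trials.
def reliance_rates(trials):
    """trials: list of (ai_correct, human_followed_ai) boolean pairs.

    Returns (over_reliance, under_reliance):
      over_reliance  = P(followed AI | AI wrong)
      under_reliance = P(rejected AI | AI right)
    """
    ai_wrong = [t for t in trials if not t[0]]
    ai_right = [t for t in trials if t[0]]
    over = (sum(1 for _, followed in ai_wrong if followed) / len(ai_wrong)
            if ai_wrong else 0.0)
    under = (sum(1 for _, followed in ai_right if not followed) / len(ai_right)
             if ai_right else 0.0)
    return over, under

# Four toy trials covering each combination of AI correctness and reliance.
trials = [(True, True), (True, False), (False, True), (False, False)]
print(reliance_rates(trials))  # → (0.5, 0.5)
```

Tracking both rates separately, rather than a single agreement score, distinguishes automation bias (high over-reliance) from conservatism bias (high under-reliance).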
Practical Applications and Case Studies (45 min)
Application of these frameworks in healthcare and other high-stakes domains
Analysis of real-world case studies demonstrating AI’s role in decision-making
The effects of AI support on different professional groups
Conclusion and Interactive Q&A (30 min)
Summarizing key insights from the tutorial
Open discussion on AI reliance and decision-making frameworks
Addressing participant questions and discussing future research directions