Human-in-the-loop XAI systems – Iterative feedback, dialogue-based interactions, and continuous refinement of explanations.
Adaptive and context-aware explanations – Tailoring the complexity, detail, and style of explanations to individual users, situational contexts, and cultural backgrounds.
Multimodal explanation strategies – Using visual, textual, auditory, and augmented reality elements to enhance comprehension.
Engaging users through adaptive and reinforcement learning – Personalizing explanations using reinforcement learning and active learning.
Balancing comprehensibility with faithfulness – Ensuring explanations are user-friendly while accurately reflecting the AI model’s logic.
Exploring perceptual and cognitive mechanisms for enhancing XAI explanations – Theories and frameworks for designing user-centered explanations.
Measuring cognitive load – Evaluating the complexity of explanations and their impact on user understanding.
Impact of explanation formats in human-centered XAI – How text-based, visual, and interactive explanations influence comprehension and decision-making.
User-centric evaluation frameworks for XAI – Assessing usability, engagement, and personalization in explanations.
Evaluating human-centered explanations – Measuring the effectiveness and clarity of explanations from the user's perspective.
Explanation pitfalls in user-centered XAI – Identifying misleading or ineffective explanation designs.
Overreliance and unjustified trust in explanations – Understanding why users might place too much trust in AI-generated explanations.
Mitigation strategies for cognitive bias in explanation interpretation – Reducing biases in how users interpret AI-generated explanations.
Dark patterns and malicious use of XAI – Detecting manipulative design patterns that obscure AI bias or mislead users.
Interpretation problems for different explanation formats and XAI approaches – Investigating differences in interpretability across feature importance plots, counterfactual explanations, rule-based approaches, and narrative descriptions.
Real-world applications of human-centered XAI – Case studies and best practices from various industries.
Interactive systems for human-centered explanations of Transformer models – Improving user understanding of attention mechanisms and decision processes in Transformer-based models.
Interactive systems for human-centered explanations in Large Language Models (LLMs) – Techniques for making LLMs more transparent and interpretable.
Enhancing interpretability in white-box systems through user-centered approaches – Leveraging user input, domain expertise, and interactive interfaces to refine inherently interpretable models.
Visual analytics systems and their user-centered potential – Integrating visual analytics tools to enhance explainability and user interaction with AI models.
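To make the contrast between explanation formats (see the topic on interpretation problems across XAI approaches above) concrete, the following is a minimal, purely illustrative sketch comparing a feature-importance explanation with a counterfactual explanation on a toy linear scorer. All feature names, weights, and the decision threshold are invented for this example and stand in for a real model.

```python
# Toy linear "loan approval" scorer; a positive score means "approve".
# Weights, bias, and threshold are illustrative assumptions, not from any real system.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
BIAS = -0.2
THRESHOLD = 0.0

def score(applicant):
    """Linear score of an applicant (dict of feature -> value)."""
    return BIAS + sum(WEIGHTS[f] * v for f, v in applicant.items())

def feature_importance(applicant):
    """Feature-importance explanation: per-feature contribution (weight x value)."""
    return {f: WEIGHTS[f] * v for f, v in applicant.items()}

def counterfactual(applicant, feature):
    """Counterfactual explanation: smallest change to one feature
    that moves the score exactly to the decision threshold."""
    w = WEIGHTS[feature]
    if w == 0:
        return None  # this feature cannot flip the decision
    delta = (THRESHOLD - score(applicant)) / w
    changed = dict(applicant)
    changed[feature] += delta
    return changed

applicant = {"income": 1.0, "debt": 1.5, "years_employed": 2.0}
contributions = feature_importance(applicant)  # answers "which features mattered?"
cf = counterfactual(applicant, "debt")         # answers "what would change the outcome?"
```

The two formats answer different user questions from the same model: the importance view attributes the current decision to features, while the counterfactual view describes an actionable change, which is one reason their interpretability can differ across users and tasks.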