This Special Session aims to establish an international forum dedicated to eXplainable Artificial Intelligence (XAI) methods in healthcare, a domain where the need for trustworthy and human-centered AI is particularly critical. In recent years, Artificial Intelligence has achieved remarkable success in modeling complex data and supporting clinical decision-making. However, the opacity of many data-driven systems continues to raise major concerns regarding fairness, accountability, and transparency.
This Special Session welcomes contributions on both ante-hoc and post-hoc explainability approaches, ranging from inherently interpretable models, such as those based on fuzzy logic, to explanation techniques applied to trained black-box models, such as deep neural networks, as well as studies of their impact on clinical understanding, decision support, and patient trust.
By bringing together researchers, clinicians, industry practitioners, and policy experts, this session aims to foster cross-disciplinary dialogue on how transparent, explainable, and human-centric AI can enhance the safety, adoption, and societal acceptance of intelligent systems in healthcare.
Possible topics related to applications in the healthcare domain include (but are not limited to):
Fuzzy and neuro-fuzzy approaches for interpretable medical AI
Explainable neural networks and hybrid neuro-symbolic models
XAI for medical data streams, multimodal data, and temporal data
Evaluation and visualization of explanations in clinical contexts
Human-in-the-loop and participatory approaches to medical AI