Workshop description
Artificial Intelligence (AI) has shown significant promise in transforming medical diagnosis, treatment, and patient care. While data-driven systems can produce highly accurate models, concerns remain about their fairness, and the opacity of many AI models is particularly problematic in critical domains such as healthcare. This workshop therefore prioritizes the development of Explainable Artificial Intelligence (XAI) in the medical field, emphasizing its importance for enhancing transparency, accountability, and trustworthiness.

XAI provides understandable insights into AI-powered clinical decision-making, helping healthcare professionals comprehend and trust the recommendations made by AI systems; this transparency is crucial for informed decision-making in patient care. Patients, too, are a vital part of the medical process: XAI ensures that AI-generated outcomes are communicated transparently, fostering patient trust and understanding, and clear explanations of diagnoses and treatment recommendations empower patients to participate actively in healthcare decisions.

Regulation is moving in the same direction. European frameworks such as the GDPR and the AI Act, along with initiatives such as DARPA's Explainable AI program in the United States, increasingly emphasize transparency and interpretability in AI applications for healthcare, and compliance with such regulations necessitates incorporating XAI into medical AI systems. XAI also helps address ethical concerns related to biased or discriminatory outcomes: by providing insight into the decision-making process, it helps identify and mitigate biases, supporting fair and equitable healthcare practices. Finally, clinicians require confidence in the reliability and validity of AI systems; XAI aids validation by offering insights into model predictions and facilitating the integration of AI technologies into clinical workflows.

This workshop aims to explore and exhibit research, methodologies, and case studies on integrating XAI in the medical domain, providing a platform for researchers, practitioners, and policymakers to share insights and advancements that improve transparency and trust in medical AI systems. Specifically, it aims to highlight the importance of XAI in medical decision-making, to share innovative approaches and technologies that enhance interpretability in medical AI, and to discuss regulatory implications and compliance strategies for incorporating XAI in healthcare AI applications.
Possible topics related to applications in the healthcare domain include (but are not limited to):
eXplainable Artificial Intelligence
Post-hoc methods for explainability (see the illustrative sketch after this list)
Ante-hoc methods for explainability
Rule-based XAI systems
Uncertainty modeling
XAI methods for neuroimaging and neural signals
Case-based explanations for AI systems
Fuzzy systems for explainability
Interpreting and explaining neural networks
Model-specific vs model-agnostic methods
Transparent and explainable learning methods
Interpretable representation learning
Causal inference and explanations
Bayesian modeling for interpretability
LLMs for explanations
Human-Centered XAI
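To ground the scope, the sketch below illustrates one of the simplest post-hoc, model-agnostic techniques in the list above: permutation feature importance, which asks how much a trained model's held-out accuracy degrades when a single feature is shuffled. This is a minimal sketch only; the dataset is synthetic and the clinical feature names are hypothetical stand-ins, not a real benchmark or a prescribed workshop method.

```python
# Minimal sketch: model-agnostic post-hoc explanation via permutation
# feature importance (scikit-learn). Data are synthetic; the feature
# names are hypothetical clinical stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "clinical" data: 5 features standing in for patient variables.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)
feature_names = ["age", "systolic_bp", "glucose", "bmi", "heart_rate"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure the drop
# in accuracy; larger drops indicate greater reliance on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name:12s} {mean:+.3f} ± {std:.3f}")
```

Because such probes treat the model as a black box, they apply to any fitted classifier, which is part of what makes the model-specific vs. model-agnostic contrast above a productive axis of discussion.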