EXEC-MAN
EXplainable hEalthCare data
Management and ANalytics
at ADBIS 2023 - September 4, 2023 - Barcelona
Medicine and healthcare management are high-stakes domains that now rely heavily on recent progress in machine and deep learning, commonly branded as “AI” (Artificial Intelligence), to support complex decision tasks. Complex data acquisition, pre-processing, and analysis pipelines are set up to answer questions about the medical situation of patients or, prospectively, to learn more about diseases, care pathways, treatments, and the physiological, chemical, or biological processes involved in medicine and healthcare. These systems thus delegate decisions that often have a crucial impact on patients’ lives to end-to-end data-driven components subject to little or no regulation.
EXEC-MAN aims to bring together an interdisciplinary audience interested in the fields of explainable AI as well as healthcare data management and analytics to discuss the unique challenges in explaining and bringing transparency to automated intelligent decision systems based on AI applied to medicine and healthcare data.
For physicians, medical staff, and patients alike, the use of automated AI-based decision systems raises serious concerns about accountability, fairness, and lack of transparency. One commonly agreed-upon remedy is to use post-hoc explanation methods to gain insight, after the fact, into how and why a decision was made by the system. Other approaches, such as saliency maps or attention-based mechanisms, reveal which parts of the input or which neurons most influence the decisions of deep architectures. As pointed out in [1], ”incorporating explanations in the medical domain with respect to legal and ethical AI is necessary to understand detailed decisions, results, and current status of the patient’s conditions”. Moreover, in the healthcare and medicine domains, physicians are not typical users of a prediction model: they need to understand how it works, for example by observing the variables that contributed most to its construction and to its output. They must also be able to evaluate the predictive model and to interact with the provided explanations, so as to gain trust and confidence. Finally, physicians should be able to tailor these tools to their needs by submitting suggestions for improvement.
This workshop aims at gathering XAI enthusiasts as well as specialists of the healthcare data management and analytics domain to:
better characterize the challenges faced when explaining models, data, or complete pipelines in this particular domain,
determine what good explanations are in this context, and whether the variety of scenarios in healthcare data management and analytics calls for a variety of specifically tailored definitions and solutions in XAI,
determine modes of interaction between physicians and intelligent systems.
The objective of this workshop is to emphasize the XAI problem with the specific application to the healthcare and medicine domains in mind.
As such, scientific challenges of interest include new explainable AI methods for any data management and analytics pipeline in the healthcare domain, as well as human-in-the-loop aspects covering the interaction of physicians with models and/or explanations.
The workshop venue will be the Universitat Politècnica de Catalunya (UPC), on Campus Nord, where the conference ADBIS will take place.
[1] Sheu, R., and Pardeshi, M. S. A survey on medical explainable AI (XAI): Recent progress, explainability approach, human interaction and scoring system. Sensors 22, 20 (2022), 8068.