CALL FOR PAPERS
Interpretability in machine learning is the capability to present, in human-understandable terms, the reasons why a model produces a specific output from a given input.
Going a step further, causality seeks to identify potential causal relationships rather than mere correlations.
These two aspects are particularly crucial in healthcare, where developing machine learning models that clinicians can trust requires that the model's predictions be interpretable and align with established medical knowledge. This alignment enables domain experts to understand and trust the outputs; it therefore demands a decision-making process that is as transparent as possible, so that predictions can be explained in a medically relevant and sound fashion.
Furthermore, the decisions generated by the model should not adversely affect the patient. In this respect, there is a pressing need for accurate causal inference to correctly understand and predict the outcomes of medical interventions.
Additionally, the ethical implications of the model’s decisions must be considered to ensure fairness and accountability.
Lastly, the model should be optimized to address comprehensive healthcare objectives, which include both clinical outcomes and patient satisfaction, reflecting the complexity of medical decision-making.
The workshop will cover a range of topics from the development of interpretable machine learning models to the application of causal inference methods in medical datasets. Emphasis will be on how these technologies can be used to foster transparency, trust, and ethical responsibility in medical decision-making and research. Sessions will include presentations on state-of-the-art research, practical challenges, and innovative solutions in making complex models interpretable and in identifying true causal relationships from medical data.
PROGRAM INTEREST TO THE BIBM COMMUNITY
The workshop is highly relevant to the BIBM community due to its direct implications for enhancing the safety, effectiveness, and efficiency of biomedical interventions through advanced computing. It addresses the growing demand for AI systems in medicine to be both transparent in their functionality and accurate in their predictions regarding causal relationships.
TOPICS OF INTEREST
Techniques and best practices for developing interpretable machine learning models in healthcare.
Foundations of causal inference and its application in medicine.
Innovations in Explainable Artificial Intelligence (XAI) for clinical decision support systems.
Graphical models for causal discovery and their use in biological networks.
Applications of causal discovery in genomics, proteomics, and systems biology.
Case studies demonstrating the successful application of XAI and causal models in medical diagnostics, treatment planning, and epidemiological studies.
Comparative studies highlighting the difference between correlation and causation in clinical data analysis.
Integration of multi-omic data for causal analysis and pathway discovery.
PROGRAM
The workshop will take place on December 3, 2024. The program is now available.
PAPER SUBMISSION, REGISTRATION AND PUBLICATION
Please submit a full-length paper (up to 8 pages, IEEE 2-column format) through the online submission system. The format instructions can be downloaded here:
http://www.ieee.org/conferences_events/conferences/publishing/templates.html
Electronic submissions (in PDF or PostScript format) are required. Selected participants will be asked to submit their revised papers in a format to be specified at the time of acceptance.
Online Submission: wi-lab.com/cyberchair/2024/bibm24/index.php
IMPORTANT DATES
Oct 20, 2024 (extended): Due date for full workshop paper submissions
Nov 5, 2024: Notification of paper acceptance to authors
Nov 21, 2024: Camera-ready versions of accepted papers due
Dec 3-6, 2024: Workshops
WORKSHOP ORGANIZERS
Chiara Zucco, University Magna Graecia of Catanzaro, Italy
Marianna Milano, University Magna Graecia of Catanzaro, Italy
PROGRAM COMMITTEE (TO BE CONFIRMED)
Marzia Settino, University of Calabria, Italy
Mario Cannataro, University Magna Graecia of Catanzaro, Italy
Maria Chiara Martinis, University Magna Graecia of Catanzaro, Italy
Giuseppe Agapito, University Magna Graecia of Catanzaro, Italy
Pietro Cinaglia, University Magna Graecia of Catanzaro, Italy
Ilaria Lazzaro, University Magna Graecia of Catanzaro, Italy