Friday, July 23rd, 09:15 - 17:30 (Eastern Time)
Click the 'Join IMLH' button to join the workshop hosted on Zoom and GatherTown.
Morning Session I (Moderator: Vicky Yao)
09:15 - 09:30 Welcoming remarks and introduction (Dr. Yuyin Zhou)
09:30 - 10:00 Invited talk #1 Mihaela van der Schaar (Title: Quantitative epistemology: conceiving a new human-machine partnership)
10:00 - 10:30 Invited talk #2 Archana Venkataraman (Title: Integrating Convolutional Neural Networks and Probabilistic Graphical Models for Epileptic Seizure Detection and Localization)
Oral #1
10:30 - 10:35 Reliable Post hoc Explanations: Modeling Uncertainty in Explainability [Supplementary] [Video] (Long paper)
10:35 - 10:40 Interpretable learning-to-defer for sequential decision-making [Video] (Long paper)
10:40 - 11:30 Posters I and coffee break
MACDA: Counterfactual Explanation with Multi-Agent Reinforcement Learning for Drug Target Prediction
Causal Graph Recovery for Sepsis-Associated Derangements via Interpretable Hawkes Networks [Supplementary]
Learning Robust Hierarchical Patterns of Human Brain across Many fMRI Studies [Supplementary]
Using Associative Classification and Odds Ratios for In-Hospital Mortality Risk Estimation
Reinforcement Learning for Workflow Recognition in Surgical Videos
TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation
Tree-based local explanations of machine learning model predictions – AraucanaXAI
Counterfactual Explanations in Sequential Decision Making Under Uncertainty
Transfer Learning with Real-World Nonverbal Vocalizations from Minimally Speaking Individuals [Supplementary]
Online structural kernel selection for mobile health
Hierarchical Analysis of Visual COVID-19 Features from Chest Radiographs
Towards Privacy-preserving Explanations in Medical Image Analysis
Prediction of intracranial hypertension in patients with severe traumatic brain injury
iFedAvg – Interpretable Data-Interoperability for Federated Learning
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability [Supplementary]
Interpretable learning-to-defer for sequential decision-making
One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images [Supplementary]
Variable selection via the sum of single effect neural networks with credible sets
Morning Session II (Moderator: Xiaoxiao Li)
11:30 - 12:00 Invited talk #3 Jim Winkens, Abhijit Guha Roy (Title: Handling the long tail in medical imaging)
12:00 - 12:30 Invited talk #4 Le Lu (Title: In Search of Effective and Reproducible Clinical Imaging Biomarkers for Pancreatic Oncology Applications of Screening, Diagnosis and Prognosis)
Oral #2
12:30 - 12:35 One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images [Supplementary][Video] (Long paper)
12:35 - 12:40 Variable selection via the sum of single effect neural networks with credible sets [Video] (Long paper)
12:40 - 13:30 Lunch break
Afternoon Session I (Moderator: Yifan Peng)
13:30 - 14:00 Invited talk #5 Himabindu Lakkaraju (Title: Towards Robust and Reliable Model Explanations for Healthcare)
14:00 - 14:30 Invited talk #6 Frank Zhang and Olga Troyanskaya (Title: Automating deep learning to interpret human genomic variations)
Oral #3
14:30 - 14:35 An Interpretable Algorithm for Uveal Melanoma Subtyping from Whole Slide Cytology Images [Video] (Long paper)
14:35 - 14:40 Interactive Visual Explanations for Deep Drug Repurposing [Video] (Short paper) (Best Paper Award)
14:40 - 15:00 Coffee Break
Afternoon Session II (Moderator: Yuyin Zhou)
15:00 - 15:30 Invited talk #7 Fei Wang (Title: Practical Considerations of Model Interpretability in Clinical Medicine: Stability, Causality and Actionability)
15:30 - 16:00 Invited talk #8 Alan Yuille (Title: Toward Interpretable Health Care)
Oral #4
16:00 - 16:05 Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays [Video] (Short paper)
16:05 - 16:10 Fast Hierarchical Games for Image Explanations [Video] (Long paper)(Best Paper Award)
16:10 - 17:00 Posters II and coffee break
Optimizing Clinical Early Warning Models to Meet False Alarm Constraints
Uncertainty Quantification for Amniotic Fluid Segmentation and Volume Prediction
Evaluating subgroup disparity using epistemic uncertainty for breast density assessment in mammography
Effective and Interpretable fMRI Analysis with Functional Brain Network Generation
Using a Cross-Task Grid of Linear Probes to Interpret CNN Model Predictions On Retinal Images
Learning sparse symbolic policies for sepsis treatment
Novel disease detection using ensembles with regularized disagreement
A reject option for automated sleep stage scoring
An Interpretable Algorithm for Uveal Melanoma Subtyping from Whole Slide Cytology Images
Afternoon Session III (Moderator: Xiaoxiao Li)
17:00 - 17:30 Invited talk #9 Su-In Lee (Title: Explainable AI for Healthcare)
17:30 Closing remarks