Friday, July 23rd, 09:15 - 17:30 (Eastern Time)

Click the 'Join IMLH' button to join the workshop, hosted on Zoom and GatherTown.

Morning Session I (Moderator: Vicky Yao)

09:15 - 09:30 Welcoming remarks and introduction (Dr. Yuyin Zhou)

09:30 - 10:00 Invited talk #1 Mihaela van der Schaar (Title: Quantitative epistemology: conceiving a new human-machine partnership)

10:00 - 10:30 Invited talk #2 Archana Venkataraman (Title: Integrating Convolutional Neural Networks and Probabilistic Graphical Models for Epileptic Seizure Detection and Localization)

10:40 - 11:30 Posters I and coffee break

MACDA: Counterfactual Explanation with Multi-Agent Reinforcement Learning for Drug Target Prediction

Causal Graph Recovery for Sepsis-Associated Derangements via Interpretable Hawkes Networks [Supplementary]

Learning Robust Hierarchical Patterns of Human Brain across Many fMRI Studies [Supplementary]

Quantifying Explainability in Healthcare NLP and Analyzing Algorithms for Performance-Explainability Tradeoff

Using Associative Classification and Odds Ratios for In-Hospital Mortality Risk Estimation

Reinforcement Learning for Workflow Recognition in Surgical Videos

TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation

Tree-based local explanations of machine learning model predictions – AraucanaXAI

Counterfactual Explanations in Sequential Decision Making Under Uncertainty

Transfer Learning with Real-World Nonverbal Vocalizations from Minimally Speaking Individuals [Supplementary]

Enhancing interpretability and reducing uncertainties in deep learning of electrocardiograms using a sub-waveform representation

Online structural kernel selection for mobile health

Hierarchical Analysis of Visual COVID-19 Features from Chest Radiographs

Towards Privacy-preserving Explanations in Medical Image Analysis

Prediction of intracranial hypertension in patients with severe traumatic brain injury

iFedAvg – Interpretable Data-Interoperability for Federated Learning

Reliable Post hoc Explanations: Modeling Uncertainty in Explainability [Supplementary]

Interpretable learning-to-defer for sequential decision-making

One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images [Supplementary]

Variable selection via the sum of single effect neural networks with credible sets

Morning Session II (Moderator: Xiaoxiao Li)

11:30 - 12:00 Invited talk #3 Jim Winkens, Abhijit Guha Roy (Title: Handling the long tail in medical imaging)

12:00 - 12:30 Invited talk #4 Le Lu (Title: In Search of Effective and Reproducible Clinical Imaging Biomarkers for Pancreatic Oncology Applications of Screening, Diagnosis and Prognosis)

12:40 - 13:30 Lunch break

Afternoon Session I (Moderator: Yifan Peng)

13:30 - 14:00 Invited talk #5 Himabindu Lakkaraju (Title: Towards Robust and Reliable Model Explanations for Healthcare)

14:00 - 14:30 Invited talk #6 Frank Zhang and Olga Troyanskaya (Title: Automating deep learning to interpret human genomic variations)

14:40 - 15:00 Coffee Break

Afternoon Session II (Moderator: Yuyin Zhou)

15:00 - 15:30 Invited talk #7 Fei Wang (Title: Practical Considerations of Model Interpretability in Clinical Medicine: Stability, Causality and Actionability)

15:30 - 16:00 Invited talk #8 Alan Yuille (Title: Toward Interpretable Health Care)

16:10 - 17:00 Posters II and coffee break

Personalized and Reliable Decision Sets: Enhancing Interpretability in Clinical Decision Support Systems

Solving inverse problems with deep neural networks driven by sparse signal decomposition in a physics-based dictionary

Optimizing Clinical Early Warning Models to Meet False Alarm Constraints

Identifying cell type-specific chemokine correlates with hierarchical signal extraction from single-cell transcriptomes

Uncertainty Quantification for Amniotic Fluid Segmentation and Volume Prediction

Evaluating subgroup disparity using epistemic uncertainty for breast density assessment in mammography

BrainNNExplainer: An Interpretable Graph Neural Network Framework for Brain Network based Disease Analysis

Effective and Interpretable fMRI Analysis with Functional Brain Network Generation

Using a Cross-Task Grid of Linear Probes to Interpret CNN Model Predictions On Retinal Images

Learning sparse symbolic policies for sepsis treatment

Novel disease detection using ensembles with regularized disagreement

A reject option for automated sleep stage scoring

Assessing Bias in Medical AI

Enabling risk-aware Reinforcement Learning for medical interventions through uncertainty decomposition

Do You See What I See? A Comparison of Radiologist Eye Gaze to Computer Vision Saliency Maps for Chest X-ray Classification

An Interpretable Algorithm for Uveal Melanoma Subtyping from Whole Slide Cytology Images

Interactive Visual Explanations for Deep Drug Repurposing

Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays

Fast Hierarchical Games for Image Explanations

Afternoon Session III (Moderator: Xiaoxiao Li)

17:00 - 17:30 Invited talk #9 Su-In Lee (Title: Explainable AI for Healthcare)

17:30 Closing remarks