Saturday, July 23, 2022, 09:15 - 17:30 (Eastern Time)
Each poster is presented in a breakout room of the Zoom meeting; the breakout room number corresponds to the poster's number in the poster list below.
09:15 - 10:30 Morning Session I (Moderators: Yucheng Tang, Weina Jin)
09:15 - 09:30 Welcoming remarks and introduction by Zongwei Zhou
09:30 - 10:00 Invited talk #1 Cynthia Rudin (Title: Almost Matching Exactly for Interpretable Causal Inference) virtual
10:00 - 10:30 Invited talk #2 James Zou (Title: Machine learning to make clinical trials more efficient and diverse) onsite
10:30 - 10:40 Poster spotlight #1
10:30 - 10:35 Multiview Concept Bottleneck Models Applied to Diagnosing Pediatric Appendicitis
10:35 - 10:40 The Disagreement Problem in Explainable Machine Learning: A Practitioner’s Perspective
10:40 - 11:30 Posters I and coffee break
Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users
Data Sculpting: Interpretable Algorithm for End-to-End Cohort Selection
Reinforcement Learning For Survival: A Clinically Motivated Method For Critically Ill Patients
Challenges and Opportunities of Shapley values in a Clinical Context, [Appendix]
Self-explaining Neural Network with Concept-based Explanations for ICU Mortality Prediction
Interpretable Anomaly Detection in Echocardiograms with Dynamic Variational Trajectory Models
Outlier Detection using Self-Organizing Maps for Automated Blood Cell Analysis
Bayesian approaches for Quantifying Clinicians’ Variability in Medical Image Quantification
Multiview Concept Bottleneck Models Applied to Diagnosing Pediatric Appendicitis
Sparse Explanations for Gestational Age Prediction in Fetal Brain Ultrasound
The Disagreement Problem in Explainable Machine Learning: A Practitioner’s Perspective
Policy Optimization with Sparse Global Contrastive Explanations
PuPill - A Model For Identifying Medications From Images Of Packaging
Robust Generative Flows on Image Reconstruction with Uncertainty Quantification
Reinforcement Temporal Logic Rule Learning to Explain the Generating Processes of Events
Using Direct Error Predictors to Improve Model Safety and Interpretability
GGUN: Global Graph Understanding via Graph Neural Network Explanation for Identification of Influential Nodes in Healthcare, [Poster]
Uncertainty-Driven Counterfactual Explainers for CXR-Based Diagnosis Models
Towards a Unified Framework for Uncertainty-aware Nonlinear Variable Selection with Theoretical Guarantees, [Appendix]
Deep Learning Reveals Dynamic Signatures of Multiple Mental Disorders
CheXplaining in Style: Counterfactual Explanations for Chest X-rays using StyleGAN
Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning
Explanation of Machine Learning Models of Colon Cancer Using SHAP Considering Interaction Effects
11:30 - 12:30 Morning Session II (Moderators: Yucheng Tang, Xiaoxiao Li)
11:30 - 12:00 Invited talk #3 Rich Caruana (Title: Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning for Healthcare) virtual
12:00 - 12:30 Invited talk #4 Been Kim (Title: How to stop worrying about interpretability, and start making progress) virtual
12:30 - 13:30 Lunch break
13:30 - 14:30 Afternoon Session I (Moderators: Yucheng Tang, Yuyin Zhou)
13:30 - 14:00 Invited talk #5 Elliot K Fishman, M.D. (Title: The Early Detection of Pancreatic Cancer: The Role of AI) virtual
14:00 - 14:30 Invited talk #6 Alan Yuille (Title: The Felix Project: Deep Networks To Detect Pancreatic Neoplasms) virtual
14:30 - 14:50 Poster spotlight #2
14:30 - 14:35 Policy Optimization with Sparse Global Contrastive Explanations
14:35 - 14:40 Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
14:40 - 14:45 Brain Network Transformer
14:45 - 14:50 Learning Optimal Predictive Checklists
14:50 - 15:00 Coffee Break
15:00 - 16:00 Afternoon Session II (Moderators: Zongwei Zhou, Yifan Peng)
15:00 - 15:30 Invited talk #7 Noa Dagan, M.D. and Noam Barda, M.D. (Title: Model explainability - the perspective of implementing prediction models for patient care in a large healthcare organization) virtual
15:30 - 16:00 Invited talk #8 Atlas Wang (Title: “Free Knowledge” in Chest X-Rays: Contrastive Learning of Images and Their Radiomics) onsite
16:00 - 16:10 Poster spotlight #3
16:00 - 16:05 Reinforcement Learning For Survival: A Clinically Motivated Method For Critically Ill Patients
16:05 - 16:10 Towards a Unified Framework for Uncertainty-aware Nonlinear Variable Selection with Theoretical Guarantees, [Appendix]
16:10 - 16:15 Closing remarks by Zongwei Zhou
16:15 - 17:30 Posters II and coffee break
(Same poster list as Posters I above.)