Saturday, July 23rd, 2022, 09:15 - 17:30 (Eastern Time)

Each poster is presented in a breakout room in the Zoom meeting.

The breakout room number corresponds to the poster number in the list below.

09:15 - 10:30 Morning Session I (Moderators: Yucheng Tang, Weina Jin)

09:15 - 09:30 Welcoming remarks and introduction by Zongwei Zhou

09:30 - 10:00 Invited talk #1 Cynthia Rudin (Title: Almost Matching Exactly for Interpretable Causal Inference) virtual

10:00 - 10:30 Invited talk #2 James Zou (Title: Machine learning to make clinical trials more efficient and diverse) onsite

10:40 - 11:30 Posters I and coffee break

  1. Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users

  2. Data Sculpting: Interpretable Algorithm for End-to-End Cohort Selection

  3. Reinforcement Learning For Survival: A Clinically Motivated Method For Critically Ill Patients

  4. Explaining AI for survival analysis: a median-SHAP approach

  5. Challenges and Opportunities of Shapley values in a Clinical Context, [Appendix]

  6. Self-explaining Neural Network with Concept-based Explanations for ICU Mortality Prediction

  7. Interpretable Anomaly Detection in Echocardiograms with Dynamic Variational Trajectory Models

  8. Outlier Detection using Self-Organizing Maps for Automated Blood Cell Analysis

  9. Bayesian approaches for Quantifying Clinicians’ Variability in Medical Image Quantification

  10. Analyzing the Effects of Handling Data Imbalance on Learned Features from Medical Images by Looking Into the Models

  11. Multiview Concept Bottleneck Models Applied to Diagnosing Pediatric Appendicitis

  12. Interpretable Deep Causal Learning for Moderation Effects

  13. Sparse Explanations for Gestational Age Prediction in Fetal Brain Ultrasound

  14. The Disagreement Problem in Explainable Machine Learning: A Practitioner’s Perspective

  15. Policy Optimization with Sparse Global Contrastive Explanations

  16. Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations

  17. Are Your Explanations Leaking Your Label?

  18. PuPill - A Model For Identifying Medications From Images Of Packaging

  19. Robust Generative Flows on Image Reconstruction with Uncertainty Quantification

  20. Reinforcement Temporal Logic Rule Learning to Explain the Generating Processes of Events

  21. Using Direct Error Predictors to Improve Model Safety and Interpretability

  22. GGUN: Global Graph Understanding via Graph Neural Network Explanation for Identification of Influential Nodes in Healthcare, [Poster]

  23. Supervised Training of Conditional Monge Maps

  24. Brain Network Transformer

  25. Encoding Domain Knowledge in Multi-view Latent Variable Models: A Bayesian Approach with Structured Sparsity

  26. Uncertainty-Driven Counterfactual Explainers for CXR-Based Diagnosis Models

  27. Towards a Unified Framework for Uncertainty-aware Nonlinear Variable Selection with Theoretical Guarantees, [Appendix]

  28. Deep Learning Reveals Dynamic Signatures of Multiple Mental Disorders

  29. CheXplaining in Style: Counterfactual Explanations for Chest X-rays using StyleGAN

  30. Robust Risk Prediction from Noisy Data

  31. Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning

  32. Learning Optimal Predictive Checklists

  33. Interpretable Model Drift Detection

  34. Interpretability by design using computer vision for behavioral sensing in child and adolescent psychiatry

  35. Explanation of Machine Learning Models of Colon Cancer Using SHAP Considering Interaction Effects

  36. Global explainability in spatially aligned image modalities

  37. Self-Supervised Learning of Echocardiogram Videos Enables Data-Efficient Diagnosis of Severe Aortic Stenosis

11:30 - 12:30 Morning Session II (Moderators: Yucheng Tang, Xiaoxiao Li)

11:30 - 12:00 Invited talk #3 Rich Caruana (Title: Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning for Healthcare) virtual

12:00 - 12:30 Invited talk #4 Been Kim (Title: How to stop worrying about interpretability, and start making progress) virtual

12:30 - 13:30 Lunch break

13:30 - 14:30 Afternoon Session I (Moderators: Yucheng Tang, Yuyin Zhou)

13:30 - 14:00 Invited talk #5 Elliot K. Fishman, M.D. (Title: The Early Detection of Pancreatic Cancer: The Role of AI) virtual

14:00 - 14:30 Invited talk #6 Alan Yuille (Title: The Felix Project: Deep Networks To Detect Pancreatic Neoplasms) virtual

14:50 - 15:00 Coffee break

15:00 - 16:00 Afternoon Session II (Moderators: Zongwei Zhou, Yifan Peng)

15:00 - 15:30 Invited talk #7 Noa Dagan, M.D. and Noam Barda, M.D. (Title: Model explainability - the perspective of implementing prediction models for patient care in a large healthcare organization) virtual

15:30 - 16:00 Invited talk #8 Atlas Wang (Title: “Free Knowledge” in Chest X-Rays: Contrastive Learning of Images and Their Radiomics) onsite

16:10 - 16:15 Closing remarks by Zongwei Zhou

16:15 - 17:30 Posters II and coffee break (same poster list as Posters I above)
