Friday, July 28, 2023, 09:15 - 17:00 (Hawaii Standard Time, GMT-10)

09:15 - 10:30   Morning Session I

09:15 - 09:30 Welcoming remarks and introduction

09:30 - 10:00 Invited talk #1 Pallavi Tiwari (in person)

10:00 - 10:30 Invited talk #2 Jimeng Sun (in person)

10:40 - 11:10  Posters I and coffee break

11:10 - 12:40   Morning Session II

11:10 - 11:40 Invited talk #3  Rajesh Ranganath (in person) Title: Have we learned to explain?

Interpretability enriches what can be gleaned from a good predictive model. Techniques that learn-to-explain have arisen because they require only a single evaluation of a model to provide an interpretation. I will discuss a flaw with several methods that learn-to-explain: the optimal explainer makes the prediction rather than highlighting the inputs that are useful for prediction, and I will discuss how to correct this flaw.  Along the way, I will develop evaluations grounded in the data and convey why interpretability techniques need to be quantitatively evaluated before their use.
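
A minimal sketch of the learn-to-explain (amortized explanation) setup the abstract refers to: an explainer network is trained once so that, at test time, a single forward pass produces a feature mask for any input. This assumes PyTorch; the architectures, data, and hyperparameters are illustrative placeholders, not the speaker's actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 20                                                                    # number of input features (assumed)
model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 2))     # the fixed predictive model f
for p in model.parameters():
    p.requires_grad_(False)                                               # only the explainer is trained

explainer = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, d)) # maps x to mask logits
opt = torch.optim.Adam(explainer.parameters(), lr=1e-3)
x = torch.randn(256, d)                                                   # stand-in data

for step in range(200):
    mask = torch.sigmoid(explainer(x))                 # soft feature mask in [0, 1], one pass per input
    target = F.softmax(model(x), dim=-1)               # f's prediction on the full input
    pred = F.log_softmax(model(x * mask), dim=-1)      # f's prediction on the masked input
    # Keep the masked prediction close to the full prediction while encouraging sparse masks.
    loss = F.kl_div(pred, target, reduction="batchmean") + 0.1 * mask.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Nothing in this objective forces the mask to select the features f actually relies on: the explainer can instead encode f's prediction in the mask pattern itself, which is the flaw the talk examines and the reason such explanations need to be evaluated quantitatively before use.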


11:40 - 12:10 Invited talk #4  Quanzheng Li (in person)

12:10 - 12:40 Invited talk #5  Himabindu Lakkaraju (virtual)

12:40 - 13:30  Lunch break

13:30 - 15:00    Afternoon Session I


13:30 - 14:00 Invited talk #6 Irene Chen (virtual) Title: Building Equitable Algorithms: Modeling Access to Healthcare in Disease Phenotyping

Advances in machine learning and the explosion of clinical data have demonstrated immense potential to fundamentally improve clinical care and deepen our understanding of human health. However, algorithms for medical interventions and scientific discovery in heterogeneous patient populations are particularly challenged by the complexities of healthcare data. Not only are clinical data noisy, missing, and irregularly sampled, but questions of equity and fairness also raise grave concerns and create additional computational challenges. In this talk, I examine how to incorporate differences in access to care into the modeling step. Using a deep generative model, we examine the task of disease phenotyping in heart failure and Parkinson's disease. The talk concludes with a discussion of how to rethink the entire machine learning pipeline through an ethical lens to build algorithms that serve the entire patient population.

14:00 - 14:30 Invited talk #7  Alex Lang (in person) Title: How to Get Over Your Black Box Trust Issues

Only the bravest machine learners have dared to tackle problems in medicine. Why? The most important reason is that the end users of ML models in medicine are skeptics of ML, and therefore one must jump through a multitude of hoops in order to deploy ML solutions. The common approach in the field is to focus on interpretability and force our ML solutions to be white box. However, this handcuffs the potential of our ML models from the start, and medicine is already a challenging enough space to model: data is hard to collect, the data one gets is always messy, and the tasks one must achieve are often not as intuitive as working on images or text.

Is there another way? Yes! Our approach is to embrace black box ML solutions, but deploy them carefully in clinical trials by rigorously controlling the risk exposure from trusting the ML solutions. I will use Alzheimer’s disease as an example to dive into our state-of-the-art deep time-series neural networks. Once I have explained our black box as well as a human reasonably can, I will detail how the outputs of the deep nets can be used in different clinical trials. In these applications, the end user prespecifies their risk tolerance, which leads to different contexts of use for the ML models. Our work demonstrates that we can embrace black box solutions by focusing on developing rigorous deployment methods.
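
A minimal, hypothetical sketch of one way a prespecified risk tolerance can fix an operating point for a black-box model: choose the decision threshold on held-out data so the estimated false-positive rate stays within the tolerance the end user specified. The function name, the stand-in data, and the choice of false-positive rate as the risk measure are assumptions for illustration, not the speaker's actual method.

```python
import numpy as np

def threshold_for_risk(scores, labels, max_fpr=0.05):
    """Smallest decision threshold whose held-out false-positive rate is within max_fpr."""
    negatives = scores[labels == 0]
    best = np.inf                               # default: flag nothing if no threshold qualifies
    for t in np.sort(scores)[::-1]:             # sweep thresholds from strictest to most permissive
        if np.mean(negatives >= t) > max_fpr:   # estimated false-positive rate at this threshold
            break
        best = t
    return best

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = rng.normal(loc=labels.astype(float))   # stand-in model outputs, higher for positives
print(threshold_for_risk(scores, labels, max_fpr=0.05))
```

A lower tolerance yields a stricter threshold and fewer flagged cases, a higher tolerance a more permissive one; this is the general sense in which a prespecified risk tolerance leads to different contexts of use for the same model.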

14:30 - 15:00 Invited talk #8 Cihang Xie (in person)

15:00 - 15:20    Coffee Break

15:20 - 15:50    Afternoon Session II


15:20 - 15:50 Invited talk #9 Judy Wawira Gichoya, MD (virtual)  Title: Harnessing the ability of AI models to detect hidden signals - how can we explain these findings?

Deep learning models have demonstrated superhuman performance at predicting features that are not obvious to human readers. For example, AI can predict a patient's self-reported race, age, sex, diagnosis, and insurance status. While some of these features are biological, most are social constructs, and given the black-box nature of the models it remains difficult to assess how this ability is achieved. In this session, we will review technical and non-technical approaches to understanding the performance of these models, which has an impact on the real-world deployment of AI.

16:25 - 16:35     Closing remarks

16:35 - 17:00     Posters II and coffee break