Making Models We Can Understand: 

An Interactive Introduction to Interpretable Machine Learning

Tutorial on Interpretable ML at the 10th International Conference on Computational Social Science (IC2S2), Philadelphia

Description

In many areas of social science, we would like to use machine learning models to make better decisions. However, many machine learning models are opaque or "black box," meaning that they do not explain their predictions in a way humans can understand. This lack of transparency is problematic: it raises questions about possible model biases and leaves accountability for incorrect decisions unclear. Interpretable or "glass box" machine learning models give insight into how decisions are made and can be used to build fairer and more accurate models. Interpretability in machine learning is crucial for high-stakes decisions and for troubleshooting. Interpretable machine learning dates back to the 1970s but has gained momentum as a subfield only recently. We will survey recent research in the area, present fundamental principles of interpretable machine learning, and offer hands-on activities that apply these techniques to real-world data.

This tutorial will introduce the frontier of interpretable machine learning and equip researchers and scientists with the knowledge and skills to apply it in their own research for effective data analysis and responsible decision-making.
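To give a flavor of the hands-on activities, the sketch below fits a small glass-box model (a shallow decision tree) whose decision rules can be read directly, in contrast to a black-box model. The dataset and library choices here are illustrative assumptions, not the tutorial's actual materials.

# Illustrative sketch (assumed example, not the tutorial's actual materials):
# fit a small "glass box" model whose decision rules can be inspected directly.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# A standard tabular dataset stands in for the real-world data used in the tutorial.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A depth-3 tree keeps the model small enough to read by hand.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")

# export_text prints the full set of if/then rules the model uses, so every
# prediction can be traced to an explicit path through the tree.
print(export_text(model, feature_names=list(X.columns)))

The point of such a model is that its entire decision process is visible: the printed rules are the model, not a post-hoc approximation of it.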

Tutorial Outline

IC2S2 Interpretable ML Tutorial

Tutorial Speakers

Alina Jade Barnett is a postdoctoral research associate at Duke University. She researches interpretable machine learning with applications in clinical medicine, creating tools that help clinicians better diagnose patients and making expert-level performance available in medical settings that lack such experts. Her work has been featured as a NeurIPS spotlight (top 3% of accepted papers) and published at CVPR (the IEEE/CVF Conference on Computer Vision and Pattern Recognition) and in Nature Machine Intelligence and SPIE Medical Imaging. She received funding from the Duke Incubation Fund for interdisciplinary work on interpretable mammogram analysis.

Chudi Zhong is an assistant professor in the School of Data Science and Society and the Department of Statistics and Operations Research at UNC-Chapel Hill. Her research lies at the intersection of machine learning, optimization, and human-model interaction. She develops interpretable machine learning algorithms and pipelines to facilitate high-stakes decision-making. She was recognized as a Rising Star in Data Science and won second place in the 2023 Bell Labs Prize.

Harsh Parikh is a postdoctoral fellow at the Johns Hopkins Bloomberg School of Public Health, specializing in cutting-edge causal inference methodologies that are accurate, trustworthy, and sensitive to domain-specific requirements. He is committed to bridging the research-to-practice gap by collaborating with leading experts across fields. He earned his Ph.D. from Duke University's Department of Computer Science, where his work on machine-learning-aided causal inference was supported by the prestigious Amazon Graduate Fellowship, and his dissertation was recognized with an Outstanding Dissertation award.