2018 Workshop on Human Interpretability in Machine Learning (WHI)
July 14, 2018
Stockholm, Sweden
Overview
The Third Annual Workshop on Human Interpretability in Machine Learning (WHI 2018), held in conjunction with ICML 2018 and the Federated Artificial Intelligence Meeting, will bring together researchers who study the interpretability of predictive models, develop interpretable machine learning algorithms, and devise methods to interpret black-box machine learning models. Participants will exchange ideas on these and allied topics, including:
- Quantifying and axiomatizing interpretability,
- Psychology of human concept learning,
- Rule learning,
- Symbolic regression,
- Case-based reasoning,
- Generalized additive models,
- Interpretation of black-box models (including deep neural networks),
- Causality of predictive models,
- Visual analytics, and
- Interpretability in reinforcement learning.
Location and Registration
The workshop will take place in Room A3 at Stockholmsmässan, Stockholm, Sweden. Please consult the main ICML or IJCAI-ECAI websites for details on registration.
Workshop Proceedings
The accepted papers of the workshop are available at https://arxiv.org/html/1807.01308.
Invited Talks
- Cynthia Rudin, Duke University
- Barbara Engelhardt, Princeton University
- Fernanda Viégas and Martin Wattenberg, Google Brain
Program
- 8:30 am Contributed Paper "Interpretable to Whom? A Role-Based Model for Analyzing Interpretable Machine Learning Systems" by Richard Tomsett, David Braines, Dan Harborne, Alun Preece, and Supriyo Chakraborty
- 8:40 am Contributed Paper "Generating Counterfactual Explanations with Natural Language" by Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, and Zeynep Akata
- 8:50 am Contributed Paper "Why Interpretability in Machine Learning? An Answer Using Distributed Detection and Data Fusion Theory" by Kush R. Varshney, Prashant Khanduri, Pranay Sharma, Shan Zhang, and Pramod K. Varshney
- 9:00 am Invited Talk "Recent Algorithms for Interpretable Machine Learning" by Cynthia Rudin
- 10:00 am Coffee Break
- 10:30 am Contributed Paper "Deep Neural Decision Trees" by Yongxin Yang, Irene Garcia Morillo, and Timothy M. Hospedales
- 10:40 am Contributed Paper "Contrastive Explanations with Local Foil Trees" by Jasper van der Waa, Marcel Robeer, Jurriaan van Diggelen, Matthieu Brinkhuis, and Mark Neerincx
- 10:50 am Contributed Paper "Defining Locality for Surrogates in Post-Hoc Interpretability" by Thibault Laugel, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, and Marcin Detyniecki
- 11:00 am Contributed Paper "AI in Education Needs Interpretable Machine Learning: Lessons from Open Learner Modelling" by Cristina Conati, Kaśka Porayska-Pomsta, and Manolis Mavrikis
- 11:10 am Contributed Paper "Instance-Level Explanations for Fraud Detection: A Case Study" by Dennis Collaris, Leo M. Vink, and Jarke J. van Wijk
- 11:30 am Invited Talk by Barbara Engelhardt
- 12:30 pm Lunch Break
- 2:00 pm Contributed Paper "Does Stated Accuracy Affect Trust in Machine Learning Algorithms?" by Ming Yin, Jennifer Wortman Vaughan, and Hanna Wallach
- 2:10 pm Contributed Paper "Visualization Techniques for LSTMs Applied to Electrocardiograms" by Jos van der Westhuizen and Joan Lasenby
- 2:20 pm Contributed Paper "Interpretable Discovery in Large Image Data Sets" by Kiri L. Wagstaff and Jake Lee
- 2:30 pm Invited Talk by Fernanda Viégas and Martin Wattenberg
- 3:30 pm Coffee Break
- 4:00 pm Contributed Paper "Maximally Invariant Data Perturbation as Explanation" by Satoshi Hara, Kouichi Ikeno, Tasuku Soma, and Takanori Maehara
- 4:10 pm Contributed Paper "Noise-Adding Methods of Saliency Map as Series of Higher Order Partial Derivative" by Junghoon Seo, Jeongyeol Choe, Jamyoung Koo, Seunghyeon Jeon, Beomsu Kim, and Taegyun Jeon
- 4:20 pm Contributed Paper "On the Robustness of Interpretability Methods" by David Alvarez-Melis and Tommi S. Jaakkola
- 4:30 pm Contributed Paper "Evaluating Feature Importance Estimates" by Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim
- 4:40 pm Contributed Paper "Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach" by Arthur Colombini Gusmão, Alvaro Henrique Chaim Correia, Glauber De Bona, and Fabio Cozman
- 4:50 pm Contributed Paper "Learning Qualitatively Diverse and Interpretable Rules for Classification" by Andrew Slavin Ross, Weiwei Pan, and Finale Doshi-Velez
- 5:10 pm Panel Discussion "Machine Learning that Matters Now" with panelists Hanna Wallach, Cynthia Rudin, and Martin Wattenberg
Call for Papers and Submission Instructions
We invite submissions of full papers as well as works-in-progress, position papers, and papers describing open problems and challenges. While original contributions are preferred, we also invite submissions of high-quality work that has recently been published in other venues or is concurrently submitted.
Papers are expected to be 4 pages but may run up to 6 pages (excluding references and acknowledgements). They should be formatted using the ICML template (in its anonymized, non-accepted mode) and submitted online at https://cmt3.research.microsoft.com/WHI2018.
Accepted papers will be selected for either a short oral presentation or a poster presentation. Non-archival proceedings will be created as an overlay on arXiv.
Key Dates
- Submission deadline: May 1, 2018
- Notification: May 22, 2018
- Workshop: July 14, 2018
Organizers
- Been Kim, Google Brain
- Kush R. Varshney, IBM Research AI
- Adrian Weller, University of Cambridge
Sponsors
We acknowledge generous support from our sponsors.