2020 Workshop on Human Interpretability in Machine Learning (WHI)

July 17, 2020

starting at 10:30 am CEST

Virtual Workshop

Overview

The Fifth Annual Workshop on Human Interpretability in Machine Learning (WHI 2020), held in conjunction with ICML 2020, will bring together artificial intelligence (AI) researchers who study the interpretability of AI systems, develop interpretable machine learning algorithms, and design methodologies for interpreting black-box machine learning models (e.g., post-hoc interpretations). This is an exciting time to study interpretable machine learning: the advances in large-scale optimization and Bayesian inference that enabled the rise of black-box machine learning are now also being exploited to develop principled approaches to large-scale interpretable machine learning. Interpretability also forms a key bridge between machine learning and other AI research directions such as machine reasoning and planning.

This year we will have a special focus on Interpretability in Practice, encouraging submissions discussing practical applications of interpretability algorithms, requirements, and tools for specific stakeholders (e.g., lawyers, policymakers, finance experts, medical professionals). Participants in the workshop will exchange ideas on these and allied topics, including:

  • Quantifying and axiomatizing interpretability

  • Psychology of human concept learning

  • Rule learning, symbolic regression, and case-based reasoning

  • Generalized additive models, sparsity, and interpretability

  • Interpretation of black-box models (including deep neural networks)

  • Interpretable unsupervised models (clustering, topic models, etc.)

  • Causality analysis of predictive models

  • Verifying, diagnosing, and debugging machine learning systems

  • Interpretability in reinforcement learning

  • Visual analytics of model innards

  • Real-world experiences of deploying interpretability at scale

  • Transparent models for auditability

  • Interdisciplinary work regarding transparency at scale

Proceedings and Previous Editions of the Workshop

This year's proceedings are available here.

Previous editions and their proceedings:

Schedule (all times Central European Summer Time, CEST)

10:15 - 10:30: Opening Remarks

10:30 - 10:45: Contributed Talk - High Dimensional Model Explanations: an Axiomatic Approach

10:45 - 11:00: Contributed Talk - Algorithmic recourse under imperfect causal knowledge: a probabilistic approach

11:00 - 11:30: Sandra Wachter (Oxford) - Fair, explainable and accountable AI in Europe: When Law meets Computer Science

11:30 - 12:30: Group A Spotlights

12:30 - 14:00: Break

14:00 - 14:30: Finale Doshi-Velez (Harvard SEAS) - Intuitive and Interpretable Representation Learning

14:30 - 14:45: Contributed Talk - On the Privacy Risks of Model Explanations

14:45 - 15:00: Contributed Talk - The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons

15:00 - 15:30: Donald Rubin (Tsinghua) - Are there differences between estimating causal effects, causal inference, and causal understanding?

15:30 - 16:30: Group B Spotlights

16:30 - 17:00: Mason Kortz (Harvard Law) - Interpretability and Accountability Under the Law

17:00 - 17:45: Interpretability Panel

Camera-Ready and Video Preparation Instructions

If your paper has been accepted as an oral, spotlight, or poster, please use this LaTeX template to prepare your final document. Please take reviewer comments into account in the final version. Include all appendices along with the main paper in the same PDF file. Upload the PDF to CMT via the camera-ready submission option, which is now open.

If your paper has been accepted as a spotlight or poster, please also prepare a 3-minute video describing your work. Use the mp4 format without any specialized codecs. Upload the video file to CMT as the second file in the camera-ready submission section. Alternatively, you may post the video to a publicly accessible location on YouTube or another similar site and upload a text file containing the URL.

Non-archival proceedings will be hosted on the mini-conf site that provides the workshop's virtual experience. We may eventually create an overlay index on arXiv.

Code of Conduct

The workshop adheres to the ACM Policy Against Harassment and the ICML Code of Conduct.

Location, Registration, and Coronavirus

In light of the SARS-CoV-2/COVID-19 pandemic, the main ICML conference and all associated events, including this workshop, are planned to take place entirely virtually. Please consult the main ICML website for details on registration and logistics. The workshop will be broadcast via a Zoom meeting; the link will be posted here. The digital content will be hosted through mini-conf.

Confirmed Invited Talks

  • Donald B. Rubin, Department of Statistics, Tsinghua University

  • Finale Doshi-Velez, Paulson School of Engineering and Applied Sciences, Harvard University

  • Mason Kortz, Berkman Klein Center for Internet & Society, Harvard Law School

  • Sandra Wachter, Oxford Internet Institute, University of Oxford

Accepted Papers

Oral presentations

On the Privacy Risks of Model Explanations. Reza Shokri (National University of Singapore); Martin Strobel (National University of Singapore); Yair Zick (National University of Singapore)

High Dimensional Model Explanations: an Axiomatic Approach. Neel B Patel (National University of Singapore); Martin R Strobel (National University of Singapore); Yair Zick (National University of Singapore)

Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. Amir-Hossein Karimi (MPI for Intelligent Systems, Tübingen)*; Julius von Kügelgen (MPI for Intelligent Systems, Tübingen & University of Cambridge); Bernhard Schölkopf (MPI for Intelligent Systems, Tübingen); Isabel Valera (MPI for Intelligent Systems)

The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons. Solon Barocas (Cornell University)*; Andrew Selbst (UCLA School of Law); Manish Raghavan (Cornell University)

Spotlight presentations

Group A

Neural Additive Models: Interpretable Machine Learning with Neural Nets. Rishabh Agarwal (Google Research, Brain Team)*; Nicholas Frosst (Google); Xuezhou Zhang (University of Wisconsin-Madison); Rich Caruana (Microsoft Research); Geoffrey Hinton (Google)

Machine Learning Explainability for External Stakeholders. Umang Bhatt (University of Cambridge)*; McKane Andrus (Partnership on AI); Adrian Weller (University of Cambridge); Alice Xiang (Partnership on AI)

Pattern-Guided Integrated Gradients. Robert Schwarzenberg (German Research Center for Artificial Intelligence (DFKI))*; Steffen Castle (German Research Center for Artificial Intelligence (DFKI))

Intriguing generalization and simplicity of adversarially trained neural networks. Chirag Agarwal (UIC); Peijie Chen (Auburn University); Anh Nguyen (Auburn University)*

A simple defense against adversarial attacks on heatmap explanations. Laura S Rieger (DTU)*; Lars Kai Hansen (Technical University of Denmark)

--- 5 mins Q+A

Sequential Explanations with Mental Model-Based Policies. Arnold YS Yeung (University of Toronto)*; Shalmali Joshi (Vector Institute); Joseph J Williams (University of Toronto); Frank Rudzicz (St Michael's Hospital; University of Toronto)

True to the Model or True to the Data? Hugh Chen (University of Washington); Joseph D Janizek (University of Washington)*; Scott Lundberg (Microsoft Research); Su-In Lee (University of Washington)

Transparent Interpretation with Knockout. Xing Han (The University of Texas at Austin)*; Yihao Feng (UT Austin); Na Zhang (Tsinghua University); Qiang Liu (UT Austin)

Human-in-the-Loop Learning of Interpretable and Intuitive Representations. Isaac Lage (Harvard)*; Finale Doshi-Velez (Harvard)

An Empirical Study of the Trade-Offs Between Interpretability and Fairness. Shahin Jabbari (Harvard University)*; Han-Ching Ou (Harvard University); Himabindu Lakkaraju (Harvard); Milind Tambe (Harvard University)

--- 5 mins Q+A

Visualizing Classification Structure of Large-Scale Classifiers. Bilal Alsallakh (Facebook)*; Zhixin Yan (BOSCH Research North America); Shabnam Ghaffarzadegan (BOSCH Research North America); Zeng Dai (BOSCH Research North America); Liu Ren (BOSCH Research North America)

Explaining Suspected Phishing Attempts with Document Anchors. Kilian Kluge (University of Ulm)*; Regina Eckhardt (University of Ulm)

Rethinking Positive Aggregation and Propagation of Gradients in Gradient-based Saliency Methods. Ashkan Khakzar (Technical University of Munich)*; Soroosh Baselizadeh (Technical University of Munich); Nassir Navab (TU Munich, Germany)

Algorithmic Recourse: from Counterfactual Explanations to Interventions. Amir-Hossein Karimi (MPI for Intelligent Systems, Tübingen)*; Bernhard Schölkopf (MPI for Intelligent Systems, Tübingen); Isabel Valera (MPI for Intelligent Systems, Tübingen)

Are Neural Nets Modular? Inspecting Their Functionality Through Differentiable Weight Masks. Róbert Csordás (IDSIA)*; Sjoerd van Steenkiste (IDSIA); Jürgen Schmidhuber (IDSIA - Lugano)

Group B

PointMask: Towards Interpretable and Bias-Resilient Point Cloud Processing. Saeid Asgari Taghanaki (Autodesk)*; Kaveh Hassani (Autodesk); Pradeep Kumar Jayaraman (Autodesk); Amir Hosein Khasahmadi (University of Toronto); Tonya Custis (Autodesk)

Object Localisation using Perturbations on the Perceptual Ball. Andrew Elliott (Alan Turing Institute)*; Stephen Law (Alan Turing Institute); Chris Russell (The Alan Turing Institute/University of Surrey)

Shapley Residuals: Quantifying the limits of the Shapley value for explanations. I. Elizabeth Kumar (University of Utah)*; Carlos Scheidegger (The University of Arizona); Suresh Venkatasubramanian (University of Utah, USA); Sorelle Friedler (Haverford College)

Learning Relevant Explanations. Chris Russell (The Alan Turing Institute/University of Surrey)*; Rory Mc Grath (Accenture Labs); Luca Costabello (Accenture Labs)

Vizarel: A System to Help Better Understand RL Agents. Shuby V Deshpande (Carnegie Mellon University)*; Jeff Schneider

--- 5 mins Q+A

Concept Bottleneck Models. Pang Wei Koh (Stanford University)*; Thao Nguyen (Google); Yew Siang Tang (Stanford University); Stephen O Mussmann (Stanford University); Emma Pierson (Stanford); Been Kim (Google); Percy Liang (Stanford University)

Scalable learning of interpretable rules for the dynamic microbiome domain. Venkata Suhas Maringanti (University of Massachusetts Dartmouth)*; Vanni Bucci (University of Massachussetts Medical School); Georg Gerber (Harvard Medical School)

Discovering Invariances in Neural Networks. Taha Bahadori (Amazon)*; Layne Price (Amazon)

Investigating Saturation Effects in Integrated Gradients. Vivek Miglani (Facebook)*; Narine Kokhlikyan (Facebook); Bilal Alsallakh (Facebook); Miguel Martin (Facebook); Orion Reblitz-Richardson (Facebook)

Explaining Deep Neural Networks using Unsupervised Clustering. Yu-Han Liu (Google); Sercan O. Arik (Google)*

--- 5 mins Q+A

Does the Whole Exceed its Parts? The Effect of Explanations on Complementary Team Performance (Extended Abstract). Gagan Bansal (University of Washington)*; Tongshuang Wu (University of Washington); Joyce Zhou (University of Washington); Raymond Fok (University of Washington); Besmira Nushi (Microsoft Research); Ece Kamar (Microsoft Research); Marco Ribeiro (Microsoft Research); Daniel Weld (University of Washington)

Data Staining: A Method for Comparing Faithfulness of Explainers. Jacob D Sippy (University of Washington)*; Gagan Bansal (University of Washington); Daniel Weld (University of Washington)

Investigating Bias in Image Classification using Model Explanations. Schrasing Tong (MIT)*; Lalana Kagal (MIT)

SEA-NN: Submodular Ensembled Attribution for Neural Networks. Piyushi Manupriya (IIT Hyderabad)*; Saketha Nath Jagarlapudi (IIT Hyderabad); Vineeth N Balasubramanian (Indian Institute of Technology, Hyderabad)

Poster presentations

Interpretable Insights about Medical Image Datasets: Using Wavelets and Spectral Methods. Roozbeh Yousefzadeh (Yale University)*; Furong Huang (University of Maryland)

Time Series Interpretability Using Temporal Fusion Transformers. Bryan Lim (University of Oxford)*; Sercan O. Arik (Google); Nicolas Loeff (Google); Tomas Pfister (Google)

Robust Semantic Interpretability: Revisiting Concept Activation Vectors. Jacob Pfau (UCSF)*; Albert Young (UCSF); Jerome Wei (UC Berkeley); Maria Wei (UCSF); Michael J Keiser (University of California, San Francisco)

Are Input-Gradients Meaningful for Interpretability? Suraj Srinivas (Idiap Research Institute & EPFL)*; François Fleuret (University of Geneva)

Consumer-Driven Explanations for Machine Learning Decisions: An Empirical Study of Robustness. Michael Hind (IBM Research); Dennis Wei (IBM Research)*; Yunfeng Zhang (IBM Research)

Estimating Example Difficulty using Variance of Gradients. Chirag Agarwal (UIC)*; Sara Hooker (Google)

Debugging Tests for Model Explanations. Julius Adebayo (MIT)*; Michael Muelly (Stanford University); Ilaria Liccardi (MIT); Been Kim (Google)

Interpreting Spatially Infinite Generative Models. Chaochao Lu (University of Cambridge)*; Richard E. Turner (University of Cambridge and Microsoft Research); Li Yingzhen (Microsoft Research Cambridge); Nate Kushman (Microsoft Research)

XAI Methods for Time Series Classification: A Brief Review. Ilija Simic (KNOW-CENTER GmbH)*; Vedran Sabol (KNOW-CENTER GmbH); Eduard Veas

Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs. Matthew L Leavitt (Facebook AI Research)*; Ari S Morcos (Facebook AI Research (FAIR))

Towards Ground Truth Explainability on Tabular Data. Brian Barr (Capital One - Center for Machine Learning)*

Optimizing AI for Teamwork. Gagan Bansal (University of Washington)*; Besmira Nushi (Microsoft Research); Ece Kamar (Microsoft Research); Eric Horvitz (MSR); Daniel Weld (University of Washington)

Characterizing and Mitigating Bias in Compact Models. Sara Hooker (Google)*; Nyalleng Moorosi (Google); Gregory Clark (Google); Samy Bengio (Google Research, Brain Team); Emily Denton (Google)

CRUDS: Counterfactual Recourse Using Disentangled Subspaces. Michael S Downs (Harvard University)*; Jonathan Chu (Harvard University); Yaniv Yacoby (Harvard University); Finale Doshi-Velez (Harvard); Weiwei Pan (Harvard University)

Visually Exploring Contrastive Explanation for Diagnostic Risk Prediction on Electronic Health Records. Bum Chul Kwon (IBM Research)*; Prithwish Chakraborty (IBM Research); James Codella (IBM Research); Amit Dhurandhar (IBM Research); Daby Sow (IBM Research); Kenney Ng (IBM Research)

siVAE: interpreting latent dimensions within variational autoencoders. Yongin Choi (University of California, Davis)*; Gerald Quon (University of California, Davis)

Explaining Creative Artifacts. Lav Varshney (UIUC: ECE)*; Nazneen Fatema Rajani (Salesforce Research)

Visualizing Transfer Learning. Róbert Szabó (Eötvös Loránd University); Dániel Katona (Budapest University of Technology and Economics); Márton Csillag (Eötvös Loránd University); Adrián Csiszárik (Alfréd Rényi Institute of Mathematics); Dániel Varga (Alfréd Rényi Institute of Mathematics)*

Organizers

Virtual Experience Chairs

  • Hendrik Strobelt, IBM Research AI

  • Pratik Mukherjee, IBM Research AI

Program Committee

Alan Chan, Alayna Kennedy, Amir Feder, Amir-Hossein Karimi, Ana Lucic, Andrei Margeloiu, Andrej Švec, Andrew Ross, Andrew Elliott, Ashkan Khakzar, Asma Ghandeharioun, Berk Ustun, Bhanukiran Vinzamuri, Bilal Alsallakh, Botty Dimanov, Brian Barr, Bryan Lim, Bum Chul Kwon, Chirag Agarwal, Chris Russell, Christin Seifert, Daby Sow, Dániel Lévai, Dom Huh, Emrah Akyol, Fatemeh Mireshghallah, Gagan Bansal, Gintare Karolina Dziugaite, Grégoire Montavon, Hubert Baniecki, I. Elizabeth Kumar, Isaac Lage, Jacob Pfau, Joseph Janizek, Julius Adebayo, Jun Yuan, Kacper Sokol, Karthikeyan Natesan Ramamurthy, Kilian Kluge, Kim de Bie, Laura Rieger, Lav Varshney, Lisa Schut, Luna Zhang, Manjari Narayan, Martin Strobel, Matthew Leavitt, Narayanan C Krishnan, Omkar Kumbhar, Pang Wei Koh, Robert Schwarzenberg, Róbert Csordás, Ronny Luss, Roozbeh Yousefzadeh, Saeid Asgari Taghanaki, Sahib Singh, Sara Hooker, Schrasing Tong, Scott Lundberg, Sercan Arik, Shuby Deshpande, Shweta Jain, Solon Barocas, Stephen Law, Suraj Srinivas, Taha Bahadori, Thibault Laugel, Thomas Merritt-Smith, Tong Wang, Ute Schmid, Venkata Suhas Maringanti, Vivek Miglani, Weiwei Pan, Xiao Liu, Yahav Bechavod, Yair Zick, Yunfeng Zhang, Zhifeng Kong

Key Dates

  • Submission deadline: June 22, 2020 (11:59 pm AoE)

  • Notification: July 8, 2020

  • Workshop: July 17, 2020

Call for Papers and Submission Instructions

We invite submissions of full papers as well as works-in-progress, position papers, and papers describing open problems and challenges. While original contributions are preferred, we also invite submissions of high-quality work that has recently been published in other venues or is concurrently submitted.

Papers should be 4-6 pages in length (excluding references and acknowledgments), formatted using the ICML template (in the anonymized, non-accepted mode), and submitted online at https://cmt3.research.microsoft.com/WHI2020. We expect submissions to be 4 pages but will allow up to 6 pages. A subset of accepted papers will be selected for short oral presentations.
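For reference, below is a minimal preamble sketch illustrating what the anonymized (non-accepted) mode means in practice, assuming the standard icml2020.sty file from the ICML 2020 author kit; the placeholder title, author, and institution names are hypothetical, and the official template remains the authoritative source for the required settings.

    \documentclass{article}

    % icml2020.sty ships with the ICML 2020 author kit; without the
    % [accepted] option it stays in anonymized (blind) submission mode.
    \usepackage{icml2020}
    % For the camera-ready version, switch to:
    % \usepackage[accepted]{icml2020}

    \icmltitlerunning{Short Running Title}

    \begin{document}

    \twocolumn[
    \icmltitle{Your Paper Title}
    \begin{icmlauthorlist}
    \icmlauthor{Anonymous Author}{inst}
    \end{icmlauthorlist}
    \icmlaffiliation{inst}{Anonymous Institution}
    \icmlcorrespondingauthor{Anonymous Author}{anon@example.com}
    \vskip 0.3in
    ]
    \printAffiliationsAndNotice{}  % suppressed while in submission mode

    % Main text: 4--6 pages excluding references and acknowledgments.

    \end{document}

In the standard kit, the [accepted] option is what reveals author names and affiliations in the compiled PDF, so the same source can serve for both the blind submission and the final version.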