Location and Registration
The workshop will take place in Room C4.8 at the International Convention Centre, Sydney, Australia. Please consult the main ICML website for details on registration.
The accepted papers of the workshop may be found here: https://arxiv.org/html/1708.02666.
Schedule
- 8:30 A. Dhurandhar, V. Iyengar, R. Luss, and K. Shanmugam, "A Formal Framework to Characterize Interpretability of Procedures" [paper][presentation][reviews]
- 8:45 A. Henelius, K. Puolamäki, and A. Ukkonen, "Interpreting Classifiers through Attribute Interactions in Datasets" [paper][presentation][reviews]
- 9:00 S. Lundberg and S.-I. Lee, "Consistent Feature Attribution for Tree Ensembles" [paper][presentation][reviews]
- 9:15 Invited Talk: D. Sontag [presentation]
- 10:00 Coffee Break
- 10:30 S. Penkov and S. Ramamoorthy, "Program Induction to Interpret Transition Systems" [paper][presentation][reviews] (best paper second runner-up)
- 10:45 R. L. Phillips, K. H. Chang, and S. Friedler, "Interpretable Active Learning" [paper][presentation][reviews]
- 11:00 C. Rosenbaum, T. Gao, and T. Klinger, "e-QRAQ: A Multi-turn Reasoning Dataset and Simulator with Explanations" [paper][presentation][reviews]
- 11:15 Invited Talk: T. Jebara, "Interpretability Through Causality" [presentation]
- 12:00 Lunch Break
- 14:00 W. Tansey, J. Thomason, and J. G. Scott, "Interpretable Low-Dimensional Regression via Data-Adaptive Smoothing" [paper][presentation][reviews]
- 14:15 Invited Talk: P. W. Koh [presentation]
- 15:00 Coffee Break
- 15:30 I. Valera, M. F. Pradier, and Z. Ghahramani, "General Latent Feature Modeling for Data Exploration Tasks" [paper][presentation][reviews] (best paper award)
- 15:45 A. Weller, "Challenges for Transparency" [paper][presentation][reviews] (best paper runner-up)
- 16:00 Awards Ceremony
- 16:05 Panel Discussion: "Human Interpretability in Machine Learning" with panelists Tony Jebara, Been Kim, Bernhard Schölkopf, and Kush Varshney; moderated by Adrian Weller
Invited Speakers
- Tony Jebara, Columbia University and Netflix
- Interpretability Through Causality
- While interpretability often involves finding more parsimonious or sparser models to facilitate human understanding, Netflix also seeks to achieve human interpretability by pursuing causal learning. Predictive models can be impressively accurate in a passive setting but may disappoint a human user who expects the recovered relationships to be causal. More importantly, a predictive model may no longer be accurate if the input variables are perturbed through an active intervention. I will briefly discuss applications at Netflix across messaging, marketing, and originals promotion that leverage causal modeling to obtain models that are actionable as well as interpretable. In particular, techniques such as two-stage least squares (2SLS), instrumental variables (IV), extensions to generalized linear models (GLMs), and other causal methods will be summarized (a minimal illustrative 2SLS sketch follows the speaker list below). Surprisingly, these causal models can be simpler and more interpretable than their purely predictive counterparts. Furthermore, sparsity can emerge when causal models ignore spurious relationships that a purely predictive objective function might otherwise recover. In general, causal models achieve better results in active-intervention settings and enjoy broader adoption from human stakeholders.
- Pang Wei Koh, Stanford University
- David Sontag, Massachusetts Institute of Technology
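Jebara's abstract mentions two-stage least squares with an instrumental variable as one causal technique that can yield simpler, more interpretable models than a purely predictive fit. As a minimal sketch on synthetic data, assuming nothing about the Netflix applications themselves, the snippet below contrasts a naive OLS estimate (biased by an unobserved confounder) with a 2SLS estimate; the data-generating process and all variable names are hypothetical.

```python
# Hypothetical illustration of 2SLS vs. naive OLS under confounding;
# not part of the workshop materials.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                      # instrument: affects x, not y directly
u = rng.normal(size=n)                      # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)        # endogenous treatment
y = 2.0 * x + 3.0 * u + rng.normal(size=n)  # outcome; true causal effect is 2.0

def ols(design, target):
    """Ordinary least squares via numpy's least-squares solver."""
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coef

ones = np.ones(n)

# Naive regression of y on x absorbs the confounder and overstates the effect.
naive = ols(np.column_stack([ones, x]), y)[1]

# Stage 1: project the endogenous regressor onto the instrument.
x_hat = np.column_stack([ones, z]) @ ols(np.column_stack([ones, z]), x)
# Stage 2: regress the outcome on the projected regressor.
causal = ols(np.column_stack([ones, x_hat]), y)[1]

print(f"naive OLS estimate: {naive:.2f}, 2SLS estimate: {causal:.2f}")
```

With a valid instrument, the 2SLS estimate lands near the true effect of 2.0, while the naive OLS estimate is inflated by the confounder, which is the sense in which the causal fit is both more accurate under intervention and easier to interpret.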
Previous Editions of the Workshop
Call for Papers and Submission Instructions
We invite submissions of full papers as well as works-in-progress, position papers, and papers describing open problems and challenges. While original contributions are preferred, we also invite submissions of high-quality work that has recently been published in other venues or is concurrently submitted.
Papers should be formatted using the ICML template and submitted online at this link. We expect submissions to be 4 pages (excluding references and acknowledgements) but will allow up to 6 pages. The review process will be open, so submissions need not be anonymized.
Accepted papers will be presented either as a short oral presentation or as a poster.
Important Dates
- Submission deadline: June 16, 2017
- Notification: June 30, 2017
- Workshop: August 10, 2017