2018 Workshop on Human Interpretability in Machine Learning (WHI)
July 14-15, 2018
The Third Annual Workshop on Human Interpretability in Machine Learning (WHI 2018), held in conjunction with ICML 2018 and the Federated Artificial Intelligence Meeting, will bring together researchers who study the interpretability of predictive models, develop interpretable machine learning algorithms, and develop methodology to interpret black-box machine learning models. Participants will exchange ideas on these and allied topics, including:
- Quantifying and axiomatizing interpretability,
- Psychology of human concept learning,
- Rule learning,
- Symbolic regression,
- Case-based reasoning,
- Generalized additive models,
- Interpretation of black-box models (including deep neural networks),
- Causality of predictive models,
- Visual analytics, and
- Interpretability in reinforcement learning.
Call for Papers and Submission Instructions
We invite submissions of full papers as well as works-in-progress, position papers, and papers describing open problems and challenges. While original contributions are preferred, we also invite submissions of high-quality work that has recently been published in other venues or is concurrently submitted.
Papers should be formatted using the ICML template (in the blind, non-accepted mode) and submitted online at https://cmt3.research.microsoft.com/WHI2018. We expect submissions to be 4 pages (excluding references and acknowledgements) but will allow up to 6 pages.
Each accepted paper will be selected for either a short oral presentation or a poster presentation. Non-archival proceedings will be created as an overlay on arXiv.
Important Dates
- Submission deadline: May 1, 2018
- Notification: May 22, 2018
- Workshop: July 14-15, 2018
Invited Speakers
- Cynthia Rudin, Duke University
- Barbara Engelhardt, Princeton University
- Fernanda Viégas and Martin Wattenberg, Google Brain