2019 Workshop on Human In the Loop Learning (HILL)

June 14, 2019

Long Beach, California, USA


This workshop is a joint effort between the 4th ICML Workshop on Human Interpretability in Machine Learning (WHI) and the ICML 2019 Workshop on Interactive Data Analysis Systems (IDAS). We have joined forces this year to run Human in the Loop Learning (HILL) in conjunction with ICML 2019!

The workshop will bring together researchers and practitioners who study interpretable and interactive learning systems, with applications in large-scale data processing, data annotation, data visualization, human-assisted data integration, and systems and tools for interpreting machine learning models, as well as algorithm design for active learning, online learning, and interpretable machine learning. The target audience includes anyone interested in solving problems with machines while keeping a human as an integral part of the process. The workshop serves as a platform where researchers can discuss approaches that bridge the gap between humans and machines and get the best of both worlds.

We welcome high-quality submissions in the broad area of human in the loop learning. A few (non-exhaustive) topics of interest include:

  • Systems for online and interactive learning algorithms,
  • Active/Interactive machine learning algorithm design,
  • Systems for collecting, preparing, and managing machine learning data,
  • Model understanding tools (verification, diagnosis, debugging, visualization, introspection, etc.),
  • Design, testing, and assessment of interactive systems for data analytics,
  • Psychology of human concept learning,
  • Generalized additive models, sparsity and rule learning,
  • Interpretable unsupervised models (clustering, topic models, etc.),
  • Interpretation of black-box models (including deep neural networks),
  • Interpretability in reinforcement learning.

Location and Registration

The workshop will take place at Room 103 of the Long Beach Convention Center, Long Beach, California. Please consult the main ICML website for registration information.


  • 8:25 am Opening Remarks
  • 8:30 am Interactive Data Analysis Invited Talk: James Philbin
  • 9:00 am Interactive Data Analysis Invited Talk: Sanja Fidler
  • 9:30 am Interactive Data Analysis Invited Talk: Bryan Catanzaro
  • 10:00 am Interactive Data Analysis Poster Session (see list of accepted papers below) and Coffee Break
  • 11:30 am Interactive Data Analysis Invited Talk: Yisong Yue
  • 12:00 pm Interactive Data Analysis Invited Talk: Vittorio Ferrari
  • 12:30 pm Lunch Break
  • 2:02 pm Interpretability Contributed Talk: Creation of User Friendly Datasets: Insights from a Case Study concerning Explanations of Loan Denials by Ajay Chander and Ramya M. Srinivasan
  • 2:09 pm Interpretability Contributed Talk: Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning by Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, and Aleksandra Mojsilović
  • 2:16 pm Interpretability Contributed Talk: Extracting Interpretable Concept-based Decision Trees from CNNs by Conner Chyung, Michael Y. Tsang, and Yan Liu
  • 2:23 pm Interpretability Contributed Talk: Concept Tree: High-Level Representation of Variables for More Interpretable Surrogate Decision Trees by Xavier Renard, Jonathan Aigrain, Nicolas Woloszko, and Marcin Detyniecki
  • 2:30 pm Interpretability Contributed Talk: Interpretable Sequence Classification via Sparse Discriminative Multiple Instance Learning by Fulton Wang and Ali Pinar
  • 2:37 pm Interpretability Contributed Talk: Regularizing Black-box Models for Improved Interpretability by Gregory Plumb, Maruan Al-Shedivat, Eric Xing, and Ameet Talwalkar
  • 2:44 pm Interpretability Contributed Talk: Counterfactual Visual Explanations by Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, and Stefan Lee
  • 2:53 pm Interpretability Contributed Talk: Issues with Post-Hoc Counterfactual Explanations: A Discussion by Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, and Marcin Detyniecki
  • 3:00 pm Coffee Break
  • 3:30 pm Interpretability Invited Discussion: California's Senate Bill 10 (SB 10) on Pretrial Release and Detention with Anna Bethke, Joshua Kroll, and Peter Eckersley, moderated by Adrian Weller
  • 4:45 pm Human in the Loop Learning Panel Discussion with Vittorio Ferrari, Marcin Detyniecki, and James Philbin, moderated by Xin Wang and Kush Varshney

Accepted Interactive Data Analysis Papers (presented in poster session)

  • A Case for Backward Compatibility for Human-AI Teams by Gagan Bansal, Besmira Nushi, Ece Kamar, Dan Weld, Walter Lasecki, and Eric Horvitz
  • Active Learning for Improving Decision-Making from Imbalanced Data by Iiris Sundin, Peter Schulam, Eero Siivola, Aki Vehtari, Suchi Saria, and Samuel Kaski
  • Aggregation of Pairwise Comparisons with Reduction of Biases by Nadezhda Bugakova, Valentina Fedorova, Gleb Gusev, and Alexey Drutsa
  • Bayesian Active Learning With Abstention Feedbacks by Cuong V. Nguyen, Lam Si Tung Ho, Huan Xu, Thien Vu Dinh Cao Duy, and Thanh Binh Nguyen
  • Crowdsourcing in the Absence of Ground Truth: A Case Study by Ramya M. Srinivasan and Ajay Chander
  • Diameter-based interactive structure search by Christopher Tosh and Daniel Hsu
  • Doubly Robust Crowdsourcing by Chong Liu and Yu-Xiang Wang
  • Enumeration of Distinct Support Vectors for Interactive Decision Making by Kentaro Kanamori, Satoshi Hara, Masakazu Ishihata, and Hiroki Arimura
  • Few-Shot Point Cloud Region Annotation with Human in the Loop by Sowmya Munukutla and Siddhant Jain
  • Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild by Abubakar Abid, James Zou, Ali Abdalla, Ali Abid, and Dawood Khan
  • Hierarchical Annotation of Images with Two-Alternative-Forced-Choice Metric Learning by Niels Hellinga and Vlado Menkovski
  • HYPE: Human eYe Perceptual Evaluation of Generative Models by Sharon Zhou, Mitchell L. Gordon, Ranjay Krishna, Austin Narcomey, Durim Morina, and Michael S. Bernstein
  • Interactive topic modeling with anchor words by Stefanos Poulis, Christopher Tosh, and Sanjoy Dasgupta
  • Knowledge-augmented Column Networks: Guiding Deep Learning with Advice by Mayukh Das, Devendra Singh Dhami, Yang Yu, Gautam Kunapuli, and Sriraam Natarajan
  • Sampling Humans for Optimizing Preferences in Coloring Artwork by Mike McCourt and Ian Dewancker
  • Teaching Tasks as Unbounded Temporal Logic Formulas by Guan Wang, Carl Trimbach, Jun Ki Lee, Mark Ho, and Michael L. Littman
  • The Practical Challenges of Active Learning by Jean-François Kagy, Afshin Rostamizadeh, Tolga Kayadelen, Ji Ma, and Jana Strnadova
  • Towards an IDE for agent design by Matthew Rahtz, James Fang, Dylan Hadfield-Menell, and Anca Dragan
  • Towards Interactive Training of Non-Player Characters in Video Games by Igor Borovikov, Jesse Harder, Michael Sadovsky, and Ahmad Beirami
  • Towards Learning Generative Models of Behavior with Human Expertise by Vaibhav V. Unhelkar and Julie A. Shah

Call for Papers and Submission Instructions

We invite submissions of full papers as well as works-in-progress, position papers, and papers describing open problems and challenges. While original contributions are preferred, we also invite submissions of high-quality work that has recently been published in, or is concurrently submitted to, other venues.

Papers should be 4-6 pages in length (excluding references and acknowledgements), formatted using the ICML template in blind (non-accepted) mode, and submitted online at https://cmt3.research.microsoft.com/HILL2019. We expect most submissions to be 4 pages, but will allow up to 6 pages.

Accepted papers will be selected for either a short oral presentation or a poster presentation. Non-archival proceedings will be created as an overlay on arXiv.

For posters, the recommended size is 6 ft x 4 ft in landscape orientation.

Key Dates

  • Submission deadline: April 28, 2019 (anywhere on earth)
  • Notification: May 15, 2019
  • Workshop: June 14, 2019

Previous Editions of the Workshop

In previous years, we have run this workshop as the Workshop on Human Interpretability in Machine Learning (WHI).
