Interpretability: Methodologies and Algorithms (IMA2019)

First International Workshop

2 December 2019

Adelaide, Australia

Important Dates

Manuscript submission

Paper submission: 28th October 2019

Notification: 11th November 2019

Camera-ready version: 18th November 2019

Submission site:

Please submit your paper via the EasyChair workshop submission site.

Submission format

For the preparation of their papers, authors are required to follow Springer’s LNCS author guidelines and use the respective format templates for LaTeX or Word. Users of Overleaf may use Springer’s LNCS template there. Springer encourages authors to include their ORCIDs in their papers.

Submissions must be in PDF only. There is no page limit.

Workshop Proceedings

Peer-reviewed papers accepted for presentation at the workshop will be published in the workshop proceedings through Cornell University’s open access repository and made available at the workshop.

Revised versions of the contributions will be published as a special issue of a related high-profile journal or as an edited book with Springer or Kluwer.

Aims and Scope

The First International Workshop on Interpretability: Methodologies and Algorithms (IMA 2019), held in conjunction with AI 2019 and AusDM 2019, will provide a joint industry, government and academia forum for the presentation and discussion of the latest ideas, research, practical developments and methodologies that address the challenges of interpretability and comprehensibility in machine learning (ML) and broader artificial intelligence (AI). The workshop aims to connect experts in explainable AI, in the interpretability of machine learning algorithms, and in data science project methodologies.

The major topics include but are not limited to:

  • The concepts of interpretability, comprehensibility and explainability in machine learning and broader data-driven algorithmic decision making;
  • Degrees of interpretability and respective interpretability features;
  • Methodologies supporting interpretability/comprehensibility in data science projects;
  • Interpretability/comprehensibility as core part of user experience;
  • Interactive and iterative methods supporting interpretability and comprehensibility;
  • Design of interpretable models;
  • Interpretability methods for ‘black-box’ machine learning models;
  • Impact of data characteristics on solution interpretability;
  • Data preprocessing and its effect on interpretability;
  • Interpretability issues across text, image, audio and video data;
  • Interpretability and accuracy;
  • Local and global explainability techniques for AI/ML models;
  • Practical aspects of achieving ML/AI solution interpretability in industry settings;
  • Transparency in machine learning and data-driven decision algorithms;
  • Design of symbolic and visual analytics means to support interpretability;
  • Psychological and cultural aspects of interpretability/comprehensibility;
  • Causality in predictive modelling and interpretability of causal relationships.

Workshop organisation

Workshop Chairs

Inna Kolyshkina, Analytikk Consulting

Simeon Simoff, Western Sydney University

Program committee

Shlomo Berkovsky, Australian Institute of Health Innovation, Macquarie University

Volker Gruhn, Lehrstuhl für Software Engineering, Universität Duisburg-Essen

Warwick Graco, Operational Analytics, Australian Taxation Office

Helen Chen, Professional Practice Centre for Health Systems, University of Waterloo

Jerzy Korczak, Wroclaw University of Economics

Reza Abbasi-Asl, University of California, San Francisco

Riccardo Guidotti, KDD Lab, ISTI-CNR and University of Pisa

Cengiz Oztireli, Disney Research and ETH Zürich

Przemyslaw Biecek, Faculty of Mathematics, Informatics and Mechanics, University of Warsaw

Jake Hofman, Microsoft Research

Workshop sponsors