BMVC 2019 Workshop

Interpretable & Explainable Machine Vision

September 12th 2019 at Cardiff University, UK

Workshop Proceedings

Workshop Programme

NOTE: the workshop will be held in Room E1.21, Sir Martin Evans Building.

14:00-14:10 Welcome

14:10-15:00 Panel: Industrial Perspective - from Theoretical Advances to Practical Techniques

Mark Hall (Airbus)

Sriram Varadarajan (BAE Systems)

Richard Tomsett (IBM Research)

15:00-15:45 Paper session 1 - Interpretability Techniques for CNNs

How to Make CNNs Lie: the GradCAM Case
Tom J Viering, Ziqi Wang, Marco Loog, Elmar Eisemann (Delft University of Technology & University of Copenhagen)

Gradient Weighted Superpixels for Interpretability in CNNs
Thomas Hartley, Kirill Sidorov, Chris Willis & David Marshall (Cardiff University & BAE Systems)

15:45-16:15 Coffee Break / BMVC Poster Session

16:15-17:45 Paper session 2 - Explainability & Visualisation Techniques

Explanation based Handwriting Verification
Mihir H Chauhan, Mohammad Abuzar Shaikh, Sargur Srihari (University at Buffalo)

Investigating Convolutional Neural Networks using Spatial Orderness
Rohan Ghosh, Anupam Gupta (National University of Singapore)

Explainable Deep Learning for Video Recognition Tasks: a Framework and Recommendations
Liam Hiley, Alun Preece, Yulia Hicks (Cardiff University)

Illuminated Decision Trees with Lucid
Richard J Tomsett, David Mott (IBM Research)

17:45 Workshop ends

About the Workshop

Recent years have seen significant advances in techniques for image processing and machine vision based on breakthroughs in machine learning and artificial intelligence, especially in the area of deep neural networks. However, such techniques are widely viewed as creating “black box” systems that are in some sense inscrutable, leading to concerns over their reliability, stability, and trustworthiness. Consequently, there has been a surge of interest in approaches aimed at “opening the black box”, commonly characterised by the terms interpretability and explainability.


Topics of interest include (but are not limited to):

  • Improving the theoretical basis of interpretability and explainability techniques
  • Practical interpretability and explainability techniques for system developers
  • Case studies of applied interpretability and explainability approaches
  • Visualisation techniques for network layer representations
  • Evaluation metrics for interpretability
  • Psychological and human-in-the-loop perspectives on interpretability and explainability
  • Approaches aimed at assuring fairness, accountability and transparency
  • Interpretable model architectures
  • Explaining and interpreting uncertainty


We welcome full papers (up to 9 pages) and short papers (4 pages), which will be selected for either oral or poster presentation. All submissions should follow the BMVC conference style. We particularly encourage PhD student-led submissions.

Submit your papers via the CMT submission site. [CLOSED]


Important Dates

Submission (full and short papers): Monday July 15th 2019 [EXTENDED]

Acceptance notification: Monday July 29th 2019

Final camera-ready versions: Monday August 19th 2019

Organising Committee

  • Alun Preece (Cardiff University, UK) - primary contact
  • Supriyo Chakraborty (IBM Research, USA)
  • Lewis Griffin (UCL, UK)
  • Mark Hall (Airbus, UK)
  • Simon Julier (UCL, UK)
  • Richard Tomsett (IBM Research, UK)
  • Sriram Varadarajan (BAE Systems, UK)
  • Chris Willis (BAE Systems, UK)