Format

Presentation Sessions and Contributed Talks

Acceptance notifications will indicate that authors are required to present their work live in a virtual meeting. Live presentations are important because they encourage the informal discussions that are one of the aims of the workshop.

The authors of the selected papers will be invited to give a 12- or 15-minute presentation, followed by 3 to 5 minutes for Q&A.

Invited Talks

We will have four invited talks, two per day: one at the start of each day and one after the lunch break.

Panels

The workshop includes two panels with invited guests, scheduled to accommodate the attendees’ time zones. The first panel features participants who comment on lessons learned and insights gained from applying XAI in industry, while the second focuses on XAI from a cognitive science perspective.

Breaks and Social

We will plan long breaks so attendees have time to step away from the screen and also to socialize with other attendees. On Zoom, we can schedule random breakout rooms, which emulate the in-person experience of meeting new people.

Access the virtual workshop here: https://virtual.2021.aaai.org/workshop_WS-11.html

Invited Talks

Ofra Amir

Professor of Industrial Engineering and Management

Timothy Miller

Professor of Computing and Information Systems

Margaret Burnett

Professor of Computer Science


Pat Langley

Professor of Computer Science


Panels

Panel I


Freddy Lecue

Scientist at CortAIx and Research Associate at Inria


Vera Liao

Research Staff Member at the IBM Thomas J. Watson Research Center


Been Kim

Staff Research Scientist at Google Brain


Panel II


Eric Vorm

Aerospace Experimental Psychologist at the US Naval Research Laboratory

Bertram F. Malle

Professor of Cognitive, Linguistic and Psychological Sciences

Denise Agosto

Professor and Director of the MS in Information Program

Schedule

Day 1 (Feb 8th)

  • 15h00 GMT (07h00 PST) - Welcome [Madumal & Tulli]

  • 15h15 GMT (07h15 PST) - Invited Talk I - Amir [Tulli]

  • 15h55 GMT (07h55 PST) - Short Break

  • 16h00 GMT (08h00 PST) - Presentations I [Weber]

    • 16h00 GMT (08h00 PST) Predicting Illness for a Sustainable Dairy Agriculture: Predicting and Explaining the Onset of Mastitis in Dairy Cows - Cathal Ryan, Christophe Guéret, Donagh Berry, Medb Corcoran, Mark T. Keane and Brian Mac Namee.

    • 16h15 GMT (08h15 PST) Counterfactual Generation and Fairness Evaluation Using Adversarially Learned Inference - Saloni Dash and Amit Sharma.

    • 16h30 GMT (08h30 PST) Generating Fast Counterfactual Explanations for Black-box Models Using Reinforcement Learning - Sahil Verma, Keegan Hines and John Dickerson.

    • 16h45 GMT (08h45 PST) Towards interpretability of Mixtures of Hidden Markov Models - Negar Safinianaini and Henrik Boström.

    • 17h00 GMT (09h00 PST) Machine-annotated Rationales: Faithfully Explaining Text Classification - Elize Herrewijnen, Dong Nguyen, Jelte Mense and Floris Bex.

    • 17h15 GMT (09h15 PST) A Symbolic Approach to Generating Contrastive Explanations for Black Box Classifiers - Lorenzo Malandri, Fabio Mercorio, Mario Mezzanzanica, Navid Nobani and Andrea Seveso.

    • 17h30 GMT (09h30 PST) Towards Explainable MCTS - Hendrik Baier and Michael Kaisers.

    • 17h45 GMT (09h45 PST) xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems - Vineel Nagisetty, Laura Graves, Joseph Scott and Vijay Ganesh.

  • 18h00 GMT (10h00 PST) - Virtual Networking Session

  • 18h10 GMT (10h10 PST) - Lunch Break

  • 18h45 GMT (10h45 PST) - Invited Talk II - Miller [Madumal]

  • 19h25 GMT (11h25 PST) - Contributed Talks I [Aha]

    • 19h25 GMT (11h25 PST) On Baselines for Local Feature Attributions - Johannes Haug, Stefan Zürn, Peter El-Jiz and Gjergji Kasneci.

    • 19h45 GMT (11h45 PST) Contextual Importance and Utility in R: the 'ciu' Package - Kary Främling.

  • 20h25 GMT (12h25 PST) - Contributed Talks II [Madumal]

    • 20h25 GMT (12h25 PST) Bandits for Learning to Explain from Explanations - Freya Behrens, Stefano Teso and Davide Mottin.

    • 20h45 GMT (12h45 PST) Improving VQA and its Explanations by Comparing Competing Explanations - Jialin Wu, Raymond Mooney and Liyan Chen.

    • 21h05 GMT (13h05 PST) Asking the Right Questions: Active Action-Model Learning - Pulkit Verma, Shashank Rao Marpally and Siddharth Srivastava.

    • 21h25 GMT (13h25 PST) Why Are You Weird? Infusing Interpretability in Isolation Forest for Anomaly Detection - Nirmal Sobha Kartha, Clément Gautrais and Vincent Vercruyssen.

  • 21h45 GMT (13h45 PST) - Long Break

  • 22h05 GMT (14h05 PST) - Panel I - Lecue, Liao, Kim [Weber & Madumal]

  • 23h35 GMT (15h35 PST) - Closing Remarks [Tulli & Madumal]

  • 23h40 GMT (15h40 PST) - End of Day 1



Day 2 (Feb 9th)

  • 15h00 GMT (07h00 PST) - Opening Remarks [Weber]

  • 15h15 GMT (07h15 PST) - Invited Talk III - Margaret Burnett [Weber]

  • 15h55 GMT (07h55 PST) - Short Break

  • 16h00 GMT (08h00 PST) - Presentations II [Tulli]

    • 16h00 GMT (08h00 PST) Benchmarking Perturbation-based Saliency Maps for Explaining Deep Reinforcement Learning Agents - Tobias Huber, Benedikt Limmer and Elisabeth Andre.

    • 16h15 GMT (08h15 PST) Effects of Uncertainty on the Quality of Feature Importance Estimates - Torgyn Shaikhina, Umang Bhatt, Roxanne Zhang, Konstantinos Georgatzis, Alice Xiang and Adrian Weller.

    • 16h30 GMT (08h30 PST) The Struggles and Subjectivity of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets - Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz and Phil Blunsom.

    • 16h45 GMT (08h45 PST) Explanation: what does it mean for humans, for machines, for man-machine interactions? - Alain Mille, Rémy Chaput and Amélie Cordier.

    • 17h00 GMT (09h00 PST) Explanation From Specification - Gyorgy Turan and Harish Naik.

    • 17h15 GMT (09h15 PST) Principles of Explanation in Human-AI Systems - Shane Mueller, Elizabeth Veinott, Robert Hoffman, Gary Klein, Lamia Alam, Tauseef Mamun and William Clancey.

    • 17h30 GMT (09h30 PST) The Interpretable Dictionary in Sparse Coding - Edward Kim, Connor Onweller, Andrew O'Brien and Kathleen McCoy.

  • 18h00 GMT (10h00 PST) - Virtual Networking Session

  • 18h10 GMT (10h10 PST) - Lunch Break

  • 18h45 GMT (10h45 PST) - Invited Talk IV - Pat Langley [Aha]

  • 19h25 GMT (11h25 PST) - Long Break

  • 20h25 GMT (12h25 PST) - Contributed Talks III [Aha]

    • 20h25 GMT (12h25 PST) Opportunities and Challenges of Explainable Case-Based Reasoning - Jakob Michael Schoenborn, Rosina O. Weber, David Aha, Jörg Cassens and Klaus-Dieter Althoff.

    • 20h45 GMT (12h45 PST) Qualitative Investigation in Explainable Artificial Intelligence: A Bit More Insight from Social Science - Adam Johs, Denise Agosto and Rosina Weber.

    • 21h05 GMT (13h05 PST) Evaluation and Comparison of CNN Visual Explanations for Histopathology - Mara Graziani, Thomas Lompech, Henning Mueller and Vincent Andrearczyk.

    • 21h25 GMT (13h25 PST) GANMEX: One-vs-One Attributions Guided by GAN-based Counterfactual Baselines - Sheng-Min Shih, Pin-Ju Tien and Zohar Karnin.

  • 21h45 GMT (13h45 PST) - Long Break

  • 22h05 GMT (14h05 PST) - Panel II - Vorm, Malle, Agosto [Aha & Tulli]

  • 23h35 GMT (15h35 PST) - Closing Remarks [Weber & Aha]

  • 23h40 GMT (15h40 PST) - End of Day 2


Accepted Papers

  • Shane Mueller, Elizabeth Veinott, Robert Hoffman, Gary Klein, Lamia Alam, Tauseef Mamun and William Clancey. Principles of Explanation in Human-AI Systems

  • Johannes Haug, Stefan Zürn, Peter El-Jiz and Gjergji Kasneci. On Baselines for Local Feature Attributions

  • Kary Främling. Contextual Importance and Utility in R: the 'ciu' Package

  • Negar Safinianaini and Henrik Boström. Towards interpretability of Mixtures of Hidden Markov Models

  • Sahil Verma, Keegan Hines and John Dickerson. Generating Fast Counterfactual Explanations for Black-box Models Using Reinforcement Learning

  • Pulkit Verma, Shashank Rao Marpally and Siddharth Srivastava. Asking the Right Questions: Active Action-Model Learning

  • Jialin Wu, Raymond Mooney and Liyan Chen. Improving VQA and its Explanations by Comparing Competing Explanations

  • Torgyn Shaikhina, Umang Bhatt, Roxanne Zhang, Konstantinos Georgatzis, Alice Xiang and Adrian Weller. Effects of Uncertainty on the Quality of Feature Importance Estimates

  • Tobias Huber, Benedikt Limmer and Elisabeth Andre. Benchmarking Perturbation-based Saliency Maps for Explaining Deep Reinforcement Learning Agents

  • Edward Kim, Connor Onweller, Andrew O'Brien and Kathleen McCoy. The Interpretable Dictionary in Sparse Coding

  • Vineel Nagisetty, Laura Graves, Joseph Scott and Vijay Ganesh. xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems

  • Cathal Ryan, Christophe Guéret, Donagh Berry, Medb Corcoran, Mark T. Keane and Brian Mac Namee. Predicting Illness for a Sustainable Dairy Agriculture: Predicting and Explaining the Onset of Mastitis in Dairy Cows

  • Elize Herrewijnen, Dong Nguyen, Jelte Mense and Floris Bex. Machine-annotated Rationales: Faithfully Explaining Text Classification

  • Jakob Michael Schoenborn, Rosina O. Weber, David Aha, Jörg Cassens and Klaus-Dieter Althoff. Opportunities and Challenges of Explainable Case-Based Reasoning

  • Lorenzo Malandri, Fabio Mercorio, Mario Mezzanzanica, Navid Nobani and Andrea Seveso. A Symbolic Approach to Generating Contrastive Explanations for Black Box Classifiers

  • Gyorgy Turan and Harish Naik. Explanation From Specification

  • Sushant Agarwal. Trade-Offs between Fairness and Interpretability in Machine Learning

  • Alain Mille, Rémy Chaput and Amélie Cordier. Explanation: what does it mean for humans, for machines, for man-machine interactions?

  • Sheng-Min Shih, Pin-Ju Tien and Zohar Karnin. GANMEX: One-vs-One Attributions Guided by GAN-based Counterfactual Baselines

  • Adam Johs, Denise Agosto and Rosina Weber. Qualitative Investigation in Explainable Artificial Intelligence: A Bit More Insight from Social Science

  • Nirmal Sobha Kartha, Clément Gautrais and Vincent Vercruyssen. Why Are You Weird? Infusing Interpretability in Isolation Forest for Anomaly Detection

  • Saloni Dash and Amit Sharma. Counterfactual Generation and Fairness Evaluation Using Adversarially Learned Inference

  • Hendrik Baier and Michael Kaisers. Towards Explainable MCTS

  • Mara Graziani, Thomas Lompech, Henning Mueller and Vincent Andrearczyk. Evaluation and Comparison of CNN Visual Explanations for Histopathology

  • Freya Behrens, Stefano Teso and Davide Mottin. Bandits for Learning to Explain from Explanations


Workshop sponsored by

J.P. Morgan