2nd Workshop on

Interactive Natural Language Technology for Explainable Artificial Intelligence

In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from data. They first analyze, curate and pre-process the data. Then, they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from it.

The focus of this workshop is on the automatic generation of interactive explanations in natural language (NL), as humans naturally do, and as a complement to visualization tools. NL technologies, comprising both NL Generation (NLG) and NL Processing (NLP) techniques, are expected to enhance knowledge extraction and representation through human-machine interaction (HMI). As remarked in the latest challenge stated by the US Defense Advanced Research Projects Agency (DARPA), "even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans". Accordingly, users without a strong background in AI require a new generation of Explainable AI systems, which are expected to interact naturally with humans and provide comprehensible explanations of automatically made decisions. The ultimate goal is to build trustworthy AI that is beneficial to people through fairness, transparency and explainability. To achieve this, not only technical but also ethical and legal issues must be carefully considered.

The virtual workshop will be held as part of the International Conference on Natural Language Generation (INLG2020), which is supported by the Special Interest Group on NLG of the Association for Computational Linguistics. INLG 2020 is organized by Dublin City University (DCU) in Dublin, Ireland, December 15-18, 2020. Due to COVID-19, the entire event will be held online in a virtual format. Our workshop will take place on December 18.

This is the second in a series of workshops to be organized over the coming years in the context of the European project NL4XAI.

Aims and Scope

This is the second edition of the NL4XAI workshop, which was first held at INLG2019 as a step forward from the 2IS&NLG workshop that we co-organized with Mariët Theune at INLG2018, itself a continuation of the line started by the XCI workshop at INLG2017. Moreover, this workshop builds on a series of thematic special sessions at international conferences.

The aim of this workshop is to provide a forum to disseminate and discuss recent advances in Explainable AI. We are mainly interested in attracting early-stage researchers and practitioners who would like to receive feedback on their work in progress and mentoring from senior researchers in the area of Explainable AI.

We envision a full-day workshop with 2 invited speakers (to be announced), an oral session with 4 short presentations of accepted papers, followed by a poster session (including demos) where workshop attendees can interact informally, and a closing session in the form of a round table plus an open discussion slot.

As a result, we expect to identify challenges and explore potential transfer opportunities between related fields, generating synergy and symbiotic collaborations in the context of Explainable AI, HMI, Argumentation and Language Generation. Moreover, we expect to strengthen the network of researchers and practitioners interested in taking NLG further to enable the next generation of Explainable AI systems.

How to participate

We solicit contributions dealing with NLG issues related to any of the many aspects of Explainable AI systems.

It is possible to submit regular papers (up to 4 pages + 1 page for references) and demo papers (up to 2 pages). Papers should follow the ACL paper format. Contributions will be subject to a blind peer review process to assess their relevance and originality for the workshop. Accepted contributions will be the primary input for the workshop, and authors will be asked to present them as either a poster or an oral presentation, whichever format is most suitable in each case.

Early-stage researchers are encouraged to take part in this workshop. In addition, senior researchers as well as non-academic participants from industry are very welcome to share their valuable experiences, preferably in the form of demo papers.

This is the second in a series of workshops to be organized in the context of the H2020 MSCA ITN NL4XAI project (Grant Agreement No 860621). The NL4XAI project trains 11 creative, entrepreneurial and innovative ESRs, who face the challenge of making AI self-explanatory, thus contributing to translating knowledge into products and services for economic and social benefit, with the support of XAI systems. The project consortium consists of 10 beneficiaries and 8 partner organizations. Participants in NL4XAI are encouraged to take part in NL4XAI2020.

Contributions will be compiled in companion proceedings to be published in the ACL Anthology.

Submissions should be made through EasyChair.

Topics

  • Definitions and Theoretical Issues on Explainable AI

  • Interpretable Models versus Explainable AI systems

  • Explaining black-box models

  • Explaining Bayesian Networks

  • Explaining Fuzzy Systems

  • Explaining Logical Formulas

  • Multi-modal Semantic Grounding and Model Transparency

  • Explainable Models for Text Production

  • Verbalizing Knowledge Bases

  • Models for Explainable Recommendations

  • Interpretable Machine Learning

  • Self-explanatory Decision-Support Systems

  • Explainable Agents

  • Argumentation Theory for Explainable AI

  • Natural Language Generation for Explainable AI

  • Interpretable Human-Machine Multi-modal Interaction

  • Metrics for Explainability Evaluation

  • Usability of Explainable AI interfaces

  • Applications of Explainable AI Systems

Important dates

  • Submissions due: October 15, 2020 (extended from September 20, 2020)

  • Notification of acceptance: November 15, 2020 (extended from October 20, 2020)

  • Camera-ready papers due: December 8, 2020 (extended from November 20, 2020)

  • Workshop session: December 18, 2020 (UPDATED!)

Program Committee (to be expanded as members are confirmed)

  • Alberto Bugarin, CiTIUS, University of Santiago de Compostela (Spain)

  • Katarzyna Budzynska, Warsaw University of Technology (WUT) (Poland)

  • Claire Gardent, CNRS/LORIA, Nancy (France)

  • Pablo Gamallo, CiTIUS, University of Santiago de Compostela (Spain)

  • Marcin Koszowy, Warsaw University of Technology (WUT) (Poland)

  • Simon Mille, Universitat Pompeu Fabra (Spain)

  • Nir Oren, University of Aberdeen (UK)

  • Martín Pereira-Fariña, University of Santiago de Compostela (Spain)

  • Ehud Reiter, University of Aberdeen, Arria NLG plc. (UK)

  • Carles Sierra, Institute of Research on Artificial Intelligence (IIIA), Spanish National Research Council (CSIC) (Spain)

  • Mariët Theune, Human Media Interaction, University of Twente (The Netherlands)

Organizers and contact

José M. Alonso

Research Centre in Intelligent Technologies

(Centro Singular de Investigación en Tecnoloxías Intelixentes, CiTIUS)

University of Santiago de Compostela, Spain

Alejandro Catala

Research Centre in Intelligent Technologies

(Centro Singular de Investigación en Tecnoloxías Intelixentes, CiTIUS)

University of Santiago de Compostela, Spain

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860621.