Special Session

Advances on eXplainable Artificial Intelligence

Organized by: Jose M. Alonso, Ciro Castiello, Corrado Mencar, Luis Magdalena

Venue: IEEE-WCCI 2022 conference, Padova, Italy, July 18-23, 2022

This Special Session is supported by the IEEE-CIS Task Force on Explainable Fuzzy Systems

The Conference Program is online (all times are given in CEST)

Session: FUZZ-SS-18 Advances on eXplainable Artificial Intelligence I

Thursday, July 21, 15:00-16:40, Room: Mantegna SA1, Chair: Jose M. Alonso

15:00 - Focus! Rating XAI Methods and Finding Biases [#663]

Authors: Anna Arias-Duart (Barcelona Supercomputing Center); Ferran Parés (Barcelona Supercomputing Center); Dario Garcia-Gasulla (Barcelona Supercomputing Center (BSC)); Victor Gimenez-Abalos (Barcelona Supercomputing Center)

Speaker: Anna Arias-Duart

15:20 - On measuring features importance in Machine Learning models in a two-dimensional representation scenario [#991]

Authors: Inmaculada Gutiérrez García-Pardo (Universidad Complutense de Madrid); Daniel Sr Santos (UCM); Javier Castro (Universidad Complutense); Daniel Gómez (UCM); Juan Antonio Guevara (UCM); Rosa Espínola Vílchez (Universidad Complutense de Madrid)

Speaker: Inmaculada Gutierrez

15:40 - Comparing user satisfaction of explanations developed with XAI methods [#1866]

Authors: Jonathan Aechtner (Maastricht University); Lena Cabrera (Maastricht University); Dennis Katwal (Maastricht University); Pierre Onghena (Maastricht University); Diego Penroz Valenzuela (Maastricht University); Anna Wilbik (Maastricht University)

Speaker: Pierre Onghena

16:00 - An Approach to Federated Learning of Explainable Fuzzy Regression Models [#2029]

Authors: José Luis Corcuera Bárcena (University of Pisa); Pietro Ducange (University of Pisa); Alessio Ercolani (University of Pisa); Francesco Marcelloni (University of Pisa); Alessandro Renda (University of Pisa)

Speaker: Alessandro Renda

16:20 - Increasing Accuracy and Explainability in Fuzzy Regression Trees: An Experimental Analysis [#3744]

Authors: Alessio Bechini (University of Pisa); José Luis Corcuera Bárcena (University of Pisa); Pietro Ducange (University of Pisa); Francesco Marcelloni (University of Pisa); Alessandro Renda (University of Pisa)

Speaker: Jose Luis Corcuera Barcena

Session: FUZZ-SS-18 Advances on eXplainable Artificial Intelligence II

Thursday, July 21, 17:00-18:40, Room: Mantegna SA1, Chair: Jose M. Alonso

17:00 - Can Post-hoc Explanations effectively detect Out-of-Distribution Samples? [#437]

Authors: Aitor Martínez (Tecnalia R&I); Javier Del Ser (TECNALIA); Pablo Garcia-Bringas (University of Deusto)

Speaker: Aitor Martínez

17:20 - Distilling Deep RL Models Into Interpretable Neuro-Fuzzy Systems [#1398]

Authors: Arne R Gevaert (Ghent University)

Speaker: Arne R Gevaert

17:40 - Measuring Model Understandability by means of Shapley Additive Explanations [#2113]

Authors: Ettore Mariotti (CiTIUS-USC); Jose Maria Alonso-Moral (CiTIUS-USC); Albert Gatt (Utrecht University)

Speaker: Ettore Mariotti

18:00 - An Open-Source Software Library for Explainable Support Vector Machine Classification [#3603]

Authors: Marcelo Loor (Ghent University); Ana Tapia-Rosero (ESPOL Polytechnic University); Guy De Tré (Ghent University)

Speaker: Marcelo Loor

18:20 - A new multi-rules approach to improve the performance of the Chi fuzzy rule classification algorithm [#372]

Authors: Leonardo Jara (Universidad de Granada); Antonio González Muñoz (Universidad de Granada); Raul Perez (Universidad de Granada)

Speaker: Antonio Gonzalez

Scope

In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from the available data. They first analyze, curate and pre-process the data; then, they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from it. Our focus is on knowledge representation and on how to enhance human-machine interaction in the context of eXplainable AI (XAI for short). XAI is an endeavor to evolve AI methodologies and technology by focusing on the development of intelligent agents capable of both generating decisions that a human can understand in a given context and explicitly explaining such decisions. This makes it possible to scrutinize the underlying intelligent models and to verify whether automated decisions are made on the basis of accepted rules and principles, so that decisions can be trusted and their impact justified. Accordingly, XAI systems are expected to interact naturally with humans, providing comprehensible explanations of the decisions they make automatically.

XAI involves not only technical but also legal and ethical issues. In addition to the European General Data Protection Regulation (GDPR), a new European regulation on AI is in progress. It will once again stress the need to push for human-centric, responsible, explainable and trustworthy AI that empowers citizens to make more informed and thus better decisions. Moreover, as remarked in the XAI challenge launched by the US Defense Advanced Research Projects Agency (DARPA), “even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans”. Accordingly, humankind requires a new generation of XAI systems.

In this session, we aim to discuss and disseminate the most recent advances in XAI, offering researchers and practitioners an opportunity to identify new and promising research directions, with special attention to Explainable Fuzzy Systems.

The session takes a further step on the road towards XAI. It is supported by the IEEE-CIS Task Force on Explainable Fuzzy Systems and the H2020 MSCA-ITN-2019 NL4XAI project.

The session addresses three main open research problems:

  1. Designing explainable models.

  2. Building explanation interfaces.

  3. Measuring explainability and evaluating quality of XAI systems.

We have previously organized several related events: XAI@FUZZ-IEEE2021, XAI@WCCI2020, XAI@INLG2019, XAI@FUZZ-IEEE2019, XAI@IEEESMC, IPMU 2018, FUZZ-IEEE 2017, IFSA-EUSFLAT 2015, EUSFLAT 2013, IEEE WCCI 2012, IEEE WCCI 2010, and the joint IFSA-EUSFLAT 2009 conference.

Topics

  • Explainable Computational Intelligence

  • Theoretical Aspects of Interpretability

  • Dimensions of Interpretability: Readability versus Understandability

  • Learning Methods for Interpretable Systems and Models

  • Interpretability Evaluation and Improvements

  • Models for Explainable Recommendations

  • Design Issues

  • Applications of XAI Systems

  • Interpretable Machine Learning

  • Interpretable Fuzzy Systems

  • Relations between Interpretability and other Criteria (such as Accuracy, Stability, Relevance, etc.)

  • Explainable Agents

  • Self-explanatory Decision-Support Systems

  • Argumentation Theory for XAI

  • Natural Language Generation for XAI

  • Human-Machine Interaction for XAI

Notes

Papers submitted to this special session will be peer-reviewed with the same criteria used for regular papers contributed to the general track of the conference. As a result, all accepted papers will be included in the proceedings of FUZZ-IEEE 2022. If you are interested in taking part in this special session, please submit your paper directly through the IEEE-WCCI 2022 submission website, selecting the following option (see the instructions for authors):

"Main research topic": Advances on Explainable Artificial Intelligence

Deadlines

  • Title and Abstract submission: January 31, 2022 (strict and mandatory; new submissions cannot be created after this deadline)

  • Paper Submission: February 14, 2022 (11:59 PM AoE) STRICT DEADLINE

  • Acceptance/rejection notification: April 26, 2022

  • Camera-ready paper submission: May 23, 2022

  • Conference dates: July 18-23, 2022

Program Committee

  • Rafael Alcalá, University of Granada (Spain)

  • Plamen Angelov, Lancaster University (UK)

  • Alberto Bugarín, University of Santiago de Compostela (Spain)

  • Giovanna Castellano, University of Bari (Italy)

  • Angelo Ciaramella, UNIPARTHENOPE (Italy)

  • Óscar Cordón, University of Granada (Spain)

  • Pietro Ducange, University of Pisa (Italy)

  • Jonathan M. Garibaldi, University of Nottingham (UK)

  • Alexander Gegov, University of Portsmouth (UK)

  • Hisao Ishibuchi, SUSTech (China)

  • Uzay Kaymak, Eindhoven University of Technology (Netherlands)

  • Bart Kosko, University of Southern California (USA)

  • Vladik Kreinovich, University of Texas at El Paso (USA)

  • Marie-Jeanne Lesot, Sorbonne Université - LIP6 (France)

  • Edwin Lughofer, Johannes Kepler University Linz (Austria)

  • Jerry M. Mendel, University of Southern California (USA)

  • Yusuke Nojima, Osaka Prefecture University (Japan)

  • Witold Pedrycz, University of Alberta (Canada)

  • Edy Portmann, Human-IST Institute (Switzerland)

  • Ehud Reiter, University of Aberdeen (UK)

  • Tajul Rosli Razak, Universiti Teknologi MARA (Malaysia)

  • Clemente Rubio-Manzano, University of Bio-Bio (Chile)

  • Jose Manuel Soto-Hidalgo, University of Cordoba (Spain)

  • Daniel Sánchez, University of Granada (Spain)

  • Luis Terán, Human-IST Institute (Switzerland)

  • Anna Wilbik, Maastricht University (Netherlands)

  • Shang-Ming Zhou, University of Plymouth (UK)

Organizers

Jose M. Alonso (josemaria.alonso.moral@usc.es)

Research Centre in Information Technologies (CiTIUS)

University of Santiago de Compostela (USC)

Campus Vida, E-15782, Santiago de Compostela, Spain

Ciro Castiello (ciro.castiello@uniba.it)

Department of Informatics

University of Bari “Aldo Moro”, Bari, Italy

Corrado Mencar (corrado.mencar@uniba.it)

Department of Informatics

University of Bari “Aldo Moro”, Bari, Italy

Luis Magdalena (luis.magdalena@upm.es)

Department of Applied Mathematics, School of Informatics

Universidad Politécnica de Madrid (UPM), Spain