Venue: IEEE-WCCI 2024 conference, Yokohama, Japan, June 30 - July 5, 2024
This Special Session is supported by the IEEE-CIS Task Force on Explainable Fuzzy Systems
In the age of Artificial Intelligence (AI), data scientists apply AI techniques to automatically extract knowledge from data. Our focus is on knowledge representation and on enhancing human-machine interaction in the context of eXplainable AI (XAI). XAI is an endeavor to evolve AI methodologies and technology by focusing on the development of intelligent agents capable of both generating decisions that a human can understand in each context and explicitly explaining such decisions. This makes it possible to scrutinize the underlying intelligent models and verify that automated decisions are made based on accepted rules and principles, so that decisions can be trusted and their impact justified. Accordingly, XAI systems are expected to interact naturally with humans, providing comprehensible explanations of automatically made decisions.
XAI involves not only technical but also legal and ethical issues. Many governmental and non-governmental institutions around the world are pushing
for human-centric, responsible, explainable, and trustworthy AI that empowers citizens to make more informed and thus better decisions (see, e.g., the EU's AI Act, the White House's AI Bill of Rights, the Global Partnership on Artificial Intelligence, etc.). In addition, as remarked in the XAI challenge stated by the US Defense Advanced Research Projects Agency (DARPA), "even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans". Accordingly, humankind requires a new generation of XAI systems.
In this session, we aim to discuss and disseminate the most recent advances in XAI, offering researchers and practitioners an opportunity to identify promising new research directions, with special attention to Explainable Fuzzy Systems. The session takes a further step on the path from XAI towards Trustworthy AI, and it is supported by the IEEE-CIS Task Force on Explainable Fuzzy Systems and the H2020 MSCA-ITN-2019 NL4XAI project (this project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860621).
Three main open research problems will be addressed:
Designing explainable models.
Building explanation interfaces.
Measuring explainability and evaluating quality of XAI systems.
We have previously organized related events: XAI@FUZZ-IEEE2023, XAI@IEEEWCCI2022, XAI@FUZZ-IEEE2021, XAI@IEEEWCCI2020, XAI@INLG2019, XAI@FUZZ-IEEE2019, XAI@IEEESMC, IPMU2018, FUZZ-IEEE2017, IFSA-EUSFLAT2015, EUSFLAT 2013, IEEEWCCI 2012, IEEEWCCI 2010, and the joint IFSA-EUSFLAT 2009.
Although this session is open to contributions on XAI in general, major attention will be paid to building and evaluating Explainable Fuzzy Systems (EXFS). Notice that EXFS were born with the aim of paving the way from interpretable machine learning to XAI. Such systems deal naturally with uncertainty and approximate reasoning (as humans do) through computing with words and perceptions. In this way, they enable humans to scrutinize the underlying intelligent models and verify that automated decisions follow accepted rules and principles, so that decisions can be trusted and their impact justified. Accordingly, EXFS automatically generate factual and counterfactual explanations, both verbal and non-verbal. Such explanations include linguistic pieces of information that embrace vague concepts, naturally representing the uncertainty inherent to most everyday activities.
Explainable Computational Intelligence
Theoretical Aspects of Interpretability & Explainability
Learning Methods for Interpretable Systems and Models
XAI Evaluation
Models for Explainable Recommendations
Design Issues: Data and Model Explainability
Detecting and Preventing Bias in Data and Models
Applications of XAI Systems
Interpretable Machine Learning
Interpretable and Explainable Fuzzy Systems
Relations between Interpretability and other Criteria (such as Accuracy, Stability, Relevance, Privacy, Security, etc.)
Explainable Agents
Self-explanatory Decision-Support Systems
Argumentation Theory for XAI
Natural Language Generation for XAI
Large Language Models and XAI
Human-Machine Interaction for XAI
Papers submitted to this special session will be peer-reviewed with the same criteria applied to regular papers contributed to the general track of the conference. Accordingly, all accepted papers will be included in the proceedings of FUZZ-IEEE 2024. If you are interested in taking part in this special session, please submit your paper directly through the IEEE-WCCI 2024 submission website, selecting the following option (see the instructions for authors):
"Main research topic": Advances on Explainable Artificial Intelligence
Paper Submission: January 15, 2024
Acceptance/rejection notification: March 15, 2024
Camera-ready paper submission (and early registration deadline): May 1, 2024
Conference dates: June 30 - July 5, 2024
Antonio Luca Alfeo, University of Pisa (Italy)
Alberto Bugarín, University of Santiago de Compostela (Spain)
Angelo Ciaramella, UNIPARTHENOPE (Italy)
Pietro Ducange, University of Pisa (Italy)
Jonathan M. Garibaldi, University of Nottingham (UK)
Alexander Gegov, University of Portsmouth (UK)
Hisao Ishibuchi, SUSTech (China)
Uzay Kaymak, Eindhoven University of Technology (Netherlands)
Bart Kosko, University of Southern California (USA)
Marie-Jeanne Lesot, Sorbonne Université - LIP6 (France)
Luis Magdalena, Universidad Politécnica de Madrid - UPM (Spain)
Jerry M. Mendel, University of Southern California (USA)
Witold Pedrycz, University of Alberta (Canada)
Edy Portmann, Human-IST Institute (Switzerland)
Clemente Rubio-Manzano, University of Bio-Bio (Chile)
Jose Manuel Soto-Hidalgo, University of Cordoba (Spain)
Daniel Sánchez, University of Granada (Spain)
Luis Terán, Human-IST Institute (Switzerland)
Anna Wilbik, Maastricht University (Netherlands)
Shang-Ming Zhou, University of Plymouth (UK)
Jose M. Alonso (josemaria.alonso.moral@usc.es)
Research Centre in Information Technologies (CiTIUS)
University of Santiago de Compostela (USC)
Santiago de Compostela, Spain
Corrado Mencar (corrado.mencar@uniba.it)
Department of Computer Science
University of Bari Aldo Moro, Bari, Italy
Vladik Kreinovich (vladik@utep.edu)
Computer Science Department
University of Texas at El Paso, USA
Tajul Rosli Razak (tajulrosli@uitm.edu.my)
School of Computing Sciences
Universiti Teknologi MARA, Malaysia