Special Session
Advances on eXplainable Artificial Intelligence
Venue: FUZZ-IEEE 2023, Incheon, Korea, August 13-17, 2023
This Special Session takes place on Monday, August 14, 2023. It is split into two sessions:
8:30 - 10:30, SS-XAI1 (Room #113) (Chair: Vladik Kreinovich)
048. Explain Reinforcement Learning Agents Through Fuzzy Rule Reconstruction
Liang Ou (University of Technology Sydney)*; Yu-Cheng Chang (University of Technology Sydney); Yukai Wang (University of Technology Sydney); Chin-Teng Lin (University of Technology Sydney, Australia)
067. NoiseCAM: Explainable AI for the Boundary Between Noise and Adversarial Attacks
Wenkai Tan (Embry-Riddle Aeronautical University); Justus Renkhoff (University of Maryland, Baltimore County); Alvaro Velasquez (University of Colorado Boulder); Ziyu Wang (Old Dominion University); Lusi Li (Old Dominion University); Jian Wang (University of Tennessee at Martin); Shuteng Niu (Bowling Green State University); Fan Yang (Embry-Riddle Aeronautical University); Yongxin Liu (Embry-Riddle Aeronautical University)
086. Towards causal fuzzy system rules using causal direction
Te Zhang (University of Nottingham)*; Jingda Ying (University of Nottingham); Christian Wagner (University of Nottingham); Jon Garibaldi (University of Nottingham, UK)
095. Fuzzy-Vocabulary-Based Detection and Explanation of Anomalies
Rahul Nath (Department of Informatics, University of Bergen); Grégory Smits (IMT Atlantique / Lab-STICC)*; Olivier Pivert (IRISA - Université Rennes)
232. Towards Explainable Linguistic Summaries
Carla Wrede (Maastricht University)*; Mark H. M. Winands (Maastricht University); Evgueni Smirnov (Maastricht University); Anna Wilbik (Maastricht University)
13:30 - 15:30, SS-XAI2 (Room #115) (Chair: Anna Wilbik)
138. An Initial Step Towards Stable Explanations for Multivariate Time Series Classifiers with LIME
Han Meng (University of Nottingham)*; Isaac Triguero (University of Nottingham); Christian Wagner (University of Nottingham)
149. Human-Oriented Fuzzy Set Based Explanations of Spatial Concepts
Brendan Young (University of Missouri)*; Derek Anderson (University of Missouri); James Keller (University of Missouri, Columbia, USA); Fred Petry (Naval Research Laboratory); Chris Michael (Naval Research Laboratory); Blake Ruprecht (University of Missouri)
164. Federated TSK Models for Predicting Quality of Experience in B5G/6G Networks
José Luis Corcuera Bárcena (University of Pisa); Pietro Ducange (University of Pisa); Francesco Marcelloni (University of Pisa); Alessandro Renda (University of Pisa); Fabrizio Ruffini (University of Pisa)*; Alessio Schiavo (University of Pisa)
188. Knowledge Integration in XAI with Gödel Integrals
Adulam Jeyasothy (Lip6-Sorbonne Université)*; Agnès Rico (Universite Claude Bernard Lyon1); Marie-Jeanne Lesot (LIP6); Christophe Marsala (LIP6, Sorbonne Université); Thibault Laugel (AXA)
210. An Explainable Intrusion Detection System for IoT Networks
Michela Fazzolari (Institute of Informatics and Telematics - National Research Council); Pietro Ducange (University of Pisa)*; Francesco Marcelloni (University of Pisa)
This Special Session is supported by the IEEE-CIS Task Force on Explainable Fuzzy Systems
Scope
In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from the available data. They first analyze, curate and pre-process the data; then, they apply Artificial Intelligence (AI) techniques to extract knowledge from it automatically. Our focus is on knowledge representation and on how to enhance human-machine interaction in the context of eXplainable AI (XAI for short). XAI is an endeavor to evolve AI methodologies and technology by focusing on the development of intelligent agents capable of both generating decisions that a human can understand in a given context and explicitly explaining such decisions. This makes it possible to scrutinize the underlying intelligent models and to verify whether automated decisions are made on the basis of accepted rules and principles, so that those decisions can be trusted and their impact justified. Accordingly, XAI systems are expected to interact naturally with humans, providing comprehensible explanations of automatically made decisions.
XAI involves not only technical but also legal and ethical issues. In addition to the European General Data Protection Regulation (GDPR), a new European regulation on AI (the AI Act) is in progress. It is expected to stress once again the need for human-centric, responsible, explainable and trustworthy AI that empowers citizens to make more informed, and thus better, decisions. Moreover, as remarked in the XAI challenge issued by the USA Defense Advanced Research Projects Agency (DARPA), "even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans". Accordingly, humankind requires a new generation of XAI systems.
In this session, we aim to discuss and disseminate the most recent advancements focused on XAI, thus offering an opportunity for researchers and practitioners to identify new promising research directions on XAI, with special attention to Explainable Fuzzy Systems.
The session takes a further step on the road towards XAI. It is supported by the IEEE-CIS Task Force on Explainable Fuzzy Systems and the H2020 MSCA-ITN-2019 NL4XAI project.
Three main open research problems to be addressed:
Designing explainable models.
Building explanation interfaces.
Measuring explainability and evaluating quality of XAI systems.
We have previously organized several related events: XAI@IEEE-WCCI2022, XAI@FUZZ-IEEE2021, XAI@IEEE-WCCI2020, XAI@INLG2019, XAI@FUZZ-IEEE2019, XAI@IEEE-SMC, IPMU2018, FUZZ-IEEE2017, IFSA-EUSFLAT2015, EUSFLAT2013, IEEE-WCCI2012, IEEE-WCCI2010, and the joint IFSA-EUSFLAT2009.
Topics
Explainable Computational Intelligence
Theoretical Aspects of Interpretability, Trustworthiness, Fairness and Accountability
Dimensions of Interpretability: Readability versus Understandability
Learning Methods for Interpretable Systems and Models
Interpretability Evaluation and Improvements
Models for Explainable Recommendations
XAI Design Issues
Applications of XAI Systems
Explainable Fuzzy Systems
Interpretable Machine Learning
Relations between Interpretability and other Criteria (such as Accuracy, Stability, Relevance, etc.)
Explainable Agents
Self-explanatory Decision-Support Systems
Argumentation Theory for XAI
Natural Language Generation for XAI
Human-Machine Interaction for XAI
Notes
Papers submitted to this special session will be peer-reviewed under the same criteria as the other papers contributed to the general track of the conference. Accordingly, all accepted papers will be included in the FUZZ-IEEE 2023 proceedings. If you are interested in taking part in this special session, please submit your paper directly through the FUZZ-IEEE 2023 submission website, selecting the option:
"Main research topic": Advances on Explainable Artificial Intelligence
Deadlines
Title and Abstract submission: January 31, 2023 (contact SS organizers by email)
Paper Submission: February 15, 2023, extended to March 1, 2023 (11:59 PM AoE)
Acceptance/rejection notification: April 15, 2023
Camera-ready paper submission: June 1, 2023
Conference dates: August 13-17, 2023
Program Committee
Rafael Alcalá, University of Granada (Spain)
Plamen Angelov, Lancaster University (UK)
Alberto Bugarín, University of Santiago de Compostela (Spain)
Giovanna Castellano, University of Bari (Italy)
Ciro Castiello, University of Bari (Italy)
Angelo Ciaramella, UNIPARTHENOPE (Italy)
Óscar Cordón, University of Granada (Spain)
Pietro Ducange, University of Pisa (Italy)
Jonathan M. Garibaldi, University of Nottingham (UK)
Alexander Gegov, University of Portsmouth (UK)
Hisao Ishibuchi, SUSTech (China)
Uzay Kaymak, Eindhoven University of Technology (Netherlands)
Bart Kosko, University of Southern California (USA)
Marie-Jeanne Lesot, Sorbonne Université - LIP6 (France)
Edwin Lughofer, Johannes Kepler University Linz (Austria)
Corrado Mencar, University of Bari (Italy)
Jerry M. Mendel, University of Southern California (USA)
Witold Pedrycz, University of Alberta (Canada)
Edy Portmann, Human-IST Institute (Switzerland)
Tajul Rosli Razak, Universiti Teknologi MARA (Malaysia)
Clemente Rubio-Manzano, University of Bio-Bio (Chile)
Jose Manuel Soto-Hidalgo, University of Granada (Spain)
Daniel Sánchez, University of Granada (Spain)
Luis Terán, Human-IST Institute (Switzerland)
Shang-Ming Zhou, University of Plymouth (UK)
Organizers
Jose M. Alonso-Moral (josemaria.alonso.moral@usc.es)
Research Centre in Intelligent Technologies (CiTIUS)
University of Santiago de Compostela (USC)
Campus Vida, E-15782, Santiago de Compostela, Spain
Anna Wilbik (a.wilbik@maastrichtuniversity.nl)
Department of Advanced Computing Sciences
Maastricht University, The Netherlands
Vladik Kreinovich (vladik@utep.edu)
Computer Science Department
University of Texas at El Paso, USA