Venue: FUZZ-IEEE 2021 conference, Luxembourg, July 11-14, 2021
This tutorial is supported by the IEEE-CIS Task Force on Explainable Fuzzy Systems
In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from the given data. They first analyze, curate and pre-process data. Then, they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from data. Indeed, AI has been identified as the “most strategic technology of the 21st century” and is already part of our everyday life [1]. The European Commission states that the “EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union's values and fundamental rights as well as ethical principles such as accountability and transparency”. It emphasizes the importance of eXplainable AI (XAI for short) for developing an AI coherent with European values: “to further strengthen trust, people also need to understand how the technology works, hence the importance of research into the explainability of AI systems”. Moreover, as remarked in the latest challenge stated by the USA Defense Advanced Research Projects Agency (DARPA), “even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans”. Accordingly, users without a strong background in AI require a new generation of XAI systems, which are expected to interact naturally with humans and to provide comprehensible explanations of the decisions they make automatically.
XAI is an endeavor to evolve AI methodologies and technology by focusing on the development of agents capable of both generating decisions that a human could understand in a given context, and explicitly explaining such decisions. This way, it is possible to verify if automated decisions are made on the basis of accepted rules and principles, so that decisions can be trusted and their impact justified.
Even though XAI systems are likely to make their impact felt in the near future, there is a shortage of experts in the fundamentals of XAI, i.e., experts ready to develop and maintain the new generation of AI systems that is expected to surround us soon. This is mainly due to the inherently multidisciplinary character of this field of research, with XAI researchers coming from heterogeneous research fields. Moreover, it is hard to find XAI experts with a holistic view as well as a wide and solid background in all the related topics.
Consequently, the main goal of this tutorial is to provide attendees with a holistic view of fundamentals and current research trends in the XAI field, paying special attention to fuzzy-grounded knowledge representation and how to enhance human-machine interaction.
[1] European Commission, Artificial Intelligence for Europe, Brussels, Belgium, “Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions”, Tech. Rep., 2018, (SWD(2018) 137 final) https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe
The tutorial will cover the main theoretical concepts of the topic, as well as examples and real applications of XAI techniques. In addition, ethical and legal aspects concerning XAI will also be considered. Contents are as follows:
Introduction: Motivating Principles and Definitions.
Review of the Most Outstanding Approaches for Designing and Developing XAI Systems
Review of the Most Outstanding Approaches for Opening Black Boxes
Review of Natural Language Technology for XAI
Review of Argumentation Technology for XAI
Review of Interactive Technology for XAI
Review of Software Tools for XAI
Fuzzy Technology for XAI
Building Interpretable Fuzzy Systems
Building Explainable Fuzzy Systems
Software Tools
Evaluation of XAI systems
Use Cases
Concluding Remarks
The tutorial length is 2 hours, organized in four blocks of 30 minutes each, covering the different questions under consideration:
We will first introduce the general ideas behind XAI by referring to real-world problems that would greatly benefit from XAI technologies. We will also highlight some of the most recent governmental and social initiatives that favor the introduction of XAI solutions into industry, professional activities and private life. This part is therefore devoted to motivating the audience about the potential impact of XAI on everyday life and, consequently, about the importance of its scientific and technical investigation.
The second part of the tutorial will be devoted to a gentle introduction to the main state-of-the-art methods for XAI. This part is cross-field, but it globally falls within the realm of Computational Intelligence. The idea of “opening the black box” will be stressed (where the “black boxes” are models designed through Machine Learning techniques, such as deep neural networks), as well as several approaches to dealing with the concept of “explanation”. Special attention will be paid to natural language (NL) explanations (NL generation of explanations, argumentative techniques, human-machine interaction, etc.).
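To make the idea of “opening the black box” concrete, the following is a minimal sketch (not the tutorial's own material) of one common strategy: fitting a shallow, interpretable surrogate to mimic a black-box classifier and reading the surrogate's rules as a global explanation. The dataset, models and parameters are illustrative assumptions; only standard scikit-learn calls are used.

```python
# Sketch: explain a black-box classifier via an interpretable "gray-box" surrogate.
# Assumptions: scikit-learn is available; Iris, a random forest as the black box,
# and a depth-3 decision tree as the surrogate are arbitrary illustrative choices.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# 1. Train an opaque "black-box" model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. Train an interpretable surrogate on the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. The surrogate's rules serve as a global, human-readable explanation,
#    and its agreement with the black box measures explanation fidelity.
print(export_text(surrogate, feature_names=list(data.feature_names)))
print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
```

Local, instance-level explanation methods (e.g., perturbation-based approaches such as LIME, cited in the bibliography) follow a related logic but approximate the black box only around the instance to be explained.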
The third part of the tutorial will take into account the special role of Fuzzy Logic (FL) in XAI. It will be shown that FL offers special features that enable a rich representation of concepts expressible in NL; therefore, it may be a privileged choice for explanation generation and processing. Interpretable fuzzy systems use FL to represent knowledge that is easy for users to read and understand: hence, they will be revisited from the point of view of XAI and NL generation.
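As a flavor of why fuzzy rule-based knowledge maps naturally onto NL explanations, here is a minimal, library-free sketch: linguistic terms are modeled by membership functions, a rule is stated over those terms, and the fired rule is verbalized directly. All names, terms, thresholds and the rule itself are illustrative assumptions, not material from the tutorial.

```python
# Sketch: a tiny interpretable fuzzy classifier that verbalizes its own decision.

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms for the variable "temperature" (degrees Celsius); hypothetical.
temperature_terms = {
    "cold": lambda t: trimf(t, -10, 0, 12),
    "warm": lambda t: trimf(t, 8, 18, 28),
    "hot":  lambda t: trimf(t, 24, 35, 50),
}

def explain(t):
    # Evaluate every linguistic term and pick the best-matching one.
    degrees = {term: mf(t) for term, mf in temperature_terms.items()}
    best_term, degree = max(degrees.items(), key=lambda kv: kv[1])
    # Illustrative rule: IF temperature IS hot THEN advise cooling.
    action = "turn the cooling on" if best_term == "hot" else "keep the current settings"
    return (f"Temperature is {best_term} (membership {degree:.2f}), "
            f"so the system decided to {action}.")

print(explain(31))  # -> "Temperature is hot (membership 0.64), so the system ..."
```

Because the rule is written in terms of linguistic labels, the explanation is obtained almost for free by template-based NL generation over the fired rule and its membership degree.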
Space will be reserved for some currently open research directions, such as the evaluation of XAI systems, to stimulate the audience, and especially young researchers, to explore and contribute to this new field. Finally, the tutorial will also offer the opportunity to present software tools implementing some XAI technologies. Some of these tools will be used to show practical use cases in order to make XAI results tangible to the audience.
Intended Audience
This tutorial is of interest to researchers, practitioners and students (PhD or Master students) working in the fields of Artificial and Computational Intelligence, with special emphasis on Fuzzy Logic. Since our aim is to provide attendees with a holistic view of the fundamentals and current research trends in the XAI field, and having in mind the broad interest of the topic, the presentations will be designed to be accessible to everyone regardless of their background.
Presenters
Research Centre in Intelligent Technologies (CiTIUS), University of Santiago de Compostela (USC), Campus Vida, E-15782, Santiago de Compostela, Spain
Jose M. Alonso received his M.S. and Ph.D. degrees in Telecommunication Engineering, both from the Technical University of Madrid (UPM), Spain, in 2003 and 2007, respectively. Since June 2016, he has been a postdoctoral researcher at the Research Centre in Intelligent Technologies (CiTIUS) of the University of Santiago de Compostela. He is currently Chair of the Task Force on “Explainable Fuzzy Systems” in the Fuzzy Systems Technical Committee of the IEEE Computational Intelligence Society, Associate Editor of the IEEE Computational Intelligence Magazine (ISSN: 1556-603X), Secretary of the ACL Special Interest Group on Natural Language Generation, and coordinator of the H2020-MSCA-ITN-2019 project “Interactive Natural Language Technology for Explainable Artificial Intelligence” (NL4XAI). He has published more than 140 papers in international journals, book chapters and peer-reviewed conferences. According to Google Scholar (accessed: October 17, 2020), he has an h-index of 22 and an i10-index of 48. His research interests include computational intelligence, explainable artificial intelligence, interpretable fuzzy systems, natural language generation, and the development of free software tools.
Department of Informatics, University of Bari “Aldo Moro”, Bari, Italy
Ciro Castiello graduated in Informatics in 2001 and received his Ph.D. in Informatics in 2005. He is currently an Assistant Professor at the Department of Informatics of the University of Bari Aldo Moro, Italy. His research interests include soft computing techniques, inductive learning mechanisms, interpretability of fuzzy systems, and eXplainable Artificial Intelligence. He has participated in several research projects and published more than seventy peer-reviewed papers. He is also regularly involved in the teaching activities of his department. He is a member of the European Society for Fuzzy Logic and Technology (EUSFLAT) and of the INdAM GNCS research group (Italian National Group of Scientific Computing).
Department of Informatics, University of Bari “Aldo Moro”, Bari, Italy
Corrado Mencar is Associate Professor in Computer Science at the Department of Informatics of the University of Bari "A. Moro", Italy. He graduated in Computer Science in 2000 and obtained his Ph.D. in Computer Science in 2005. In 2001 he worked as an analyst and software designer for some Italian companies. Since 2005 he has been working on research topics concerning Computational Intelligence and Granular Computing. As part of his research activity, he has participated in several research projects and has published over one hundred peer-reviewed international scientific publications. He is also Associate Editor of several international scientific journals, as well as a Featured Reviewer for ACM Computing Reviews. He regularly organizes scientific events on his research topics with international colleagues. He is currently Vice-chair of the IEEE-CIS Task Force on “Explainable Fuzzy Systems”. His research topics include fuzzy logic systems with a focus on interpretability and Explainable Artificial Intelligence, Granular Computing, Computational Intelligence applied to the Semantic Web, and Intelligent Data Analysis. As part of his teaching activity, he is, or has been, responsible for numerous classes and PhD courses on various topics, including Computer Architectures, Programming Fundamentals, Computational Intelligence and Information Theory.
Department of Applied Mathematics, School of Informatics, Universidad Politécnica de Madrid (UPM), Spain
Luis Magdalena is Full Professor with the Department of Applied Mathematics for ICT of the Universidad Politécnica de Madrid. From 2006 to 2016 he was Director General of the European Centre for Soft Computing in Asturias (Spain). Under his direction, the Centre was recognized with the IEEE-CIS Outstanding Organization Award in 2012. Prof. Magdalena has been actively involved in more than forty research projects. He has co-authored or co-edited ten books, including “Genetic Fuzzy Systems”, “Accuracy Improvements in Linguistic Fuzzy Modelling”, and “Interpretability Issues in Fuzzy Modeling”. He has also authored over one hundred and fifty papers in books, journals and conferences, receiving more than 6000 citations. Prof. Magdalena has been President of the European Society for Fuzzy Logic and Technology, Vice-president of the International Fuzzy Systems Association, and is Vice-President for Technical Activities of the IEEE Computational Intelligence Society for the period 2020-21.
Bibliography
Alonso, J. M., Castiello, C., Magdalena, L., Mencar, C. (2021) “Explainable Fuzzy Systems: Paving the way from Interpretable Fuzzy Systems to Explainable AI Systems”, Studies in Computational Intelligence, Springer, In press
Stepin, I., Catala, A., Pereira-Fariña, M., Alonso, J. M. (2021) "Factual and Counterfactual Explanation of Fuzzy Information Granules", In W. Pedrycz, Shyi-Ming Chen (Eds), “Interpretable Artificial Intelligence: A Perspective of Granular Computing”, Studies in Computational Intelligence, Springer Nature Switzerland AG, In press
Alonso, J. M. (2020) "Teaching Explainable Artificial Intelligence to High School Students", International Journal of Computational Intelligence Systems, 13(1):974-987, https://dx.doi.org/10.2991/ijcis.d.200715.003
Alonso, J. M., Ducange, P., Pecori, R., Vilas, R. (2020) "Building Explanations for Fuzzy Decision Trees with the ExpliClas Software", IEEE World Congress on Computational Intelligence (IEEE-WCCI), Glasgow, Scotland, https://dx.doi.org/10.1109/FUZZ48607.2020.9177725
Alonso, J. M., Toja-Alamancos, J., Bugarin, A. (2020) "Experimental Study on Generating Multi-modal Explanations of Black-box Classifiers in terms of Gray-box Classifiers", IEEE World Congress on Computational Intelligence (IEEE-WCCI), Glasgow, Scotland, https://dx.doi.org/10.1109/FUZZ48607.2020.9177770
Stepin, I., Alonso, J. M., Catala, A., Pereira-Fariña, M. (2020) "Generation and Evaluation of Factual and Counterfactual Explanations for Decision Trees and Fuzzy Rule-based Classifiers", IEEE World Congress on Computational Intelligence (IEEE-WCCI), Glasgow, Scotland, https://dx.doi.org/10.1109/FUZZ48607.2020.9177629
Alonso, J. M. (2019) "From Zadeh's computing with words towards explainable Artificial Intelligence". In: Fuller, R., Giove, S., Masulli, F. (Eds.), WILF2018 - 12th International Workshop on Fuzzy Logic and Applications, Springer, pp. 244-248, https://doi.org/10.1007/978-3-030-12544-8_21
Alonso, J.M. (2019) "Explainable Artificial Intelligence for Kids". In 11th Conference of the European Society for Fuzzy Logic and Technology, Prague (Czech Republic), Atlantis Press, pp. 134-141, https://dx.doi.org/10.2991/eusflat-19.2019.21
Alonso, J. M., Bugarín, A. (2019) "ExpliClas: Automatic generation of explanations in natural language for Weka classifiers". In IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pp. 1-6, http://dx.doi.org/10.1109/FUZZ-IEEE.2019.8859018
Alonso, J. M., Bugarín, A., Reiter, E. (2017) "Special Issue on Natural Language Generation with Computational Intelligence". IEEE Computational Intelligence Magazine, 12(3):8-9, http://dx.doi.org/10.1109/MCI.2017.2708919
Alonso, J.M., Casalino, G. (2019) "Explainable Artificial Intelligence for Human-Centric Data Analysis in Virtual Learning Environments", Higher Education Learning Methodologies and Technologies Online, Springer, Vol. 1091, pp. 125-138, https://dx.doi.org/10.1007/978-3-030-31284-8_10
Alonso, J. M., Castiello, C., Mencar, C. (2019) "The role of interpretable fuzzy systems in designing cognitive cities", Designing Cognitive Cities: Linking Citizens to Computational Intelligence to Make Efficient, Sustainable and Resilient Cities a Reality, Springer, Vol. 176, pp. 131-152, https://dx.doi.org/10.1007/978-3-030-00317-3_6
Alonso, J. M., Castiello, C., Mencar, C. (2018) "A bibliometric analysis of the explainable artificial intelligence research field". In 17th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU), Vol. CCIS853, pp. 3-15, https://doi.org/10.1007/978-3-319-91473-2_1
Alonso, J. M., Castiello, C., Mencar, C. (2015). "Interpretability of Fuzzy Systems: Current Research Trends and Prospects". In Kacprzyk, J., Pedrycz, W. (Eds.), Springer Handbook of Computational Intelligence, pp. 219–237, Springer Berlin / Heidelberg, https://doi.org/10.1007/978-3-662-43505-2_14
Alonso, J. M., Ramos-Soto, A., Castiello, C., Mencar, C. (2018) "Hybrid data-expert explainable beer style classifier". In IJCAI/ECAI Workshop on Explainable Artificial Intelligence, pp. 1–5, https://www.dropbox.com/s/jgzkfws41ulkzxl/proceedings.pdf?dl=0
Alonso, J.M., Ramos-Soto, A., Reiter, E., van Deemter, K. (2017) "An Exploratory Study on the Benefits of using Natural Language for Explaining Fuzzy Rule-based Systems", In IEEE International Conference on Fuzzy Systems, Naples (Italy), https://dx.doi.org/10.1109/FUZZ-IEEE.2017.8015489
Biran, O., Cotton, C. (2017) "Explanation and justification in machine learning: A survey". In IJCAI Workshop on Explainable AI, pp. 8-13
Gatt, A., Krahmer, E. (2018) "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation". Journal of Artificial Intelligence Research, 61:65-170
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D. (2018) "A Survey of Methods for Explaining Black Box Models". ACM Comput. Surv., 51:1-42
Gunning, D. (2016) "Explainable Artificial Intelligence (XAI)". Defense Advanced Research Projects Agency (DARPA), Arlington, USA, Tech. Rep. DARPA-BAA-16-53, https://www.darpa.mil/program/explainable-artificial-intelligence
Lipton, Z.C. (2018) "The Mythos of Model Interpretability". ACM Queue, 16(3):31-57, https://queue.acm.org/detail.cfm?id=3241340
Mencar, C., Alonso, J. M. (2019) "Paving the Way to Explainable Artificial Intelligence with Fuzzy Modeling". In Fuzzy Logic and Applications (12th International Workshop, WILF 2018). Genova (Italy): Springer, https://doi.org/10.1007/978-3-030-12544-8_17
Miller, T. (2019). "Explanation in Artificial Intelligence: Insights from the Social Sciences", Artificial Intelligence, 267:1-38
Pancho, D. P., Alonso, J. M., Magdalena, L. (2013) "Quest for interpretability-accuracy trade-off supported by Fingrams into the fuzzy modeling tool GUAJE". International Journal of Computational Intelligence Systems, 6(sup1):46-60, Atlantis Press, http://dx.doi.org/10.1080/18756891.2013.818189
Pearl, J., Mackenzie, D. (2018) "The book of why. The new science of cause and effect". Basic Books, https://dx.doi.org/10.1126/science.aau9731
Ramos-Soto, A., Alonso, J.M., Reiter, E., van Deemter, K., Gatt, A. (2019) "Fuzzy-Based Language Grounding of Geographical References: From Writers to Readers", International Journal of Computational Intelligence Systems, Atlantis Press, 12(2):970-983, 2019, https://dx.doi.org/10.2991/ijcis.d.190826.002
Ribeiro, M. T., Singh, S., Guestrin, C. (2016) "Why should I trust you? Explaining the predictions of any classifier". In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’16. New York, NY, USA: ACM, pp. 1135–1144, http://doi.acm.org/10.1145/2939672.2939778
Tintarev, N., Masthoff, J. (2015) "Explaining recommendations: Design and evaluation". In Ricci, F., Rokach, L., Shapira, B. (Eds.), Recommender Systems Handbook, Springer, 2015, pp. 353-382, https://doi.org/10.1007/978-1-4899-7637-6_10
Zadeh, L. A. (1999) "From computing with numbers to computing with words - From manipulation of measurements to manipulation of perceptions", IEEE Trans. Circuits Syst.—I: Fundam. Theory Appl, 45(1):105-119
Complementary Material
XAI
NLG
Fuzzy Logic
Recent Related Events
Special session on "Advances on Explainable AI" at IEEE WCCI 2020
Special session on "Software for Soft Computing" at IEEE WCCI 2020
Special session on XAI@FUZZ-IEEE2019
Workshop on XAI@INLG2019
Special session on XAI@IEEESMC2019
Special session on XAI@IPMU2018