Motivation
In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from data. They first analyze, curate, and pre-process the data; then, they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from it. Indeed, AI has been identified as the “most strategic technology of the 21st century” and is already part of our everyday life[1]. The European Commission states that the “EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union's values and fundamental rights as well as ethical principles such as accountability and transparency”. It emphasizes the importance of Explainable AI (XAI for short) for developing an AI coherent with European values: “to further strengthen trust, people also need to understand how the technology works, hence the importance of research into the explainability of AI systems”. Moreover, as remarked in the XAI challenge launched by the US Defense Advanced Research Projects Agency (DARPA)[2], “even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans”. Accordingly, users without a strong background in AI require a new generation of XAI systems, which are expected to interact naturally with humans and to provide comprehensible explanations of automatically made decisions.
XAI is an endeavor to evolve AI methodologies and technology by focusing on the development of agents capable of both generating decisions that a human can understand in a given context and explicitly explaining such decisions. In this way, intelligent models can be scrutinized to verify whether automated decisions are made on the basis of accepted rules and principles, so that decisions can be trusted and their impact justified.
Even though XAI systems are likely to make their impact felt in the near future, there is a lack of experts prepared to develop the fundamentals of XAI, i.e., to build and maintain the new generation of AI systems that are expected to surround us soon. This is mainly due to the inherently multidisciplinary character of this field of research, with XAI researchers coming from heterogeneous fields such as Computational Intelligence, Computational Linguistics, and Human-Machine Interaction. Moreover, it is hard to find XAI experts with a holistic view as well as a broad and solid background in all the related topics.
In light of the growing and broad interest in the emerging topic of XAI, it is worth noting that about 30% of the publications on XAI indexed in Scopus (as of 20 October 2017) came from authors well recognized in the field of Fuzzy Logic[3]. This is mainly due to the huge effort made by the fuzzy community, since the 1980s, to carefully design interpretable fuzzy systems, i.e., intelligent systems that exhibit a good interpretability-accuracy trade-off and can be understood by humans. The fuzzy logic formalism lets designers naturally combine expert knowledge with knowledge automatically extracted from data, and it supports human-centric computing, granular computing, approximate reasoning, computing with words, and so on. All these contributions build on Zadeh’s pioneering work on fuzzy sets and systems, which introduced linguistic variables and rules and placed interpretability among the main concerns when building fuzzy systems. Moreover, since human explanations are naturally verbalized with words in natural language, fuzzy systems are ready to play a key role in the development of XAI systems. Of course, XAI goes beyond building interpretable systems: it faces the challenge of building AI-based systems that are self-explainable, i.e., able to explain their behavior and justify their decisions.
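To make concrete what “interpretable” means here, the following minimal Python sketch (an illustration of the general idea only, not code from the cited works; the variable names, terms, and thresholds are invented for the example) defines a linguistic variable with triangular membership functions and shows how a rule such as “IF temperature IS hot THEN fan_speed IS fast” can be both evaluated and verbalized:

```python
# Minimal sketch of an interpretable fuzzy rule (illustrative only):
# linguistic terms defined by membership functions, plus a single
# human-readable rule whose firing strength can itself be verbalized.

def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Linguistic variable "temperature" with three human-readable terms
# (hypothetical ranges chosen for the example).
temperature_terms = {
    "cold": lambda x: trimf(x, -10.0, 0.0, 15.0),
    "warm": lambda x: trimf(x, 10.0, 20.0, 30.0),
    "hot":  lambda x: trimf(x, 25.0, 35.0, 50.0),
}

def describe(x):
    """Explain a reading in words: the essence of interpretability."""
    memberships = {term: mu(x) for term, mu in temperature_terms.items()}
    best = max(memberships, key=memberships.get)
    return f"{x} °C is '{best}' to degree {memberships[best]:.2f}"

if __name__ == "__main__":
    x = 28.0
    print(describe(x))  # -> "28.0 °C is 'hot' to degree 0.30"
    # Rule: IF temperature IS hot THEN fan_speed IS fast
    firing = temperature_terms["hot"](x)  # degree to which the rule fires
    print(f"Rule 'IF temperature IS hot THEN fan IS fast' fires to {firing:.2f}")
```

A complete fuzzy system would add rule aggregation, inference, and defuzzification on top of this; the point of the sketch is simply that both the knowledge base and the system’s reaction to an input remain expressible in words.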
[1] European Commission, “Artificial Intelligence for Europe: Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions”, Brussels, Belgium, Tech. Rep. SWD(2018) 137 final, 2018. https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe
[2] DARPA, Explainable Artificial Intelligence (XAI) program: https://www.darpa.mil/program/explainable-artificial-intelligence
[3] J. M. Alonso, C. Castiello, and C. Mencar, “A Bibliometric Analysis of the Explainable Artificial Intelligence Research Field”, in Proc. 17th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU), Cádiz, Spain, Springer, CCIS 853, pp. 2-15, 2018.