This tutorial bridges the gap between regulatory mandates and practical implementation through hands-on application of emerging technical standards. The EU regulatory ecosystem poses the challenge of balancing legal and sociotechnical drivers for XAI systems, with core tensions arising along the dimensions of oversight, user needs, and litigation.
Artificial Intelligence (AI) is pervading many aspects of our society. This poses technical (as well as ethical and legal) challenges to ensure that people are not overlooked when their own data is processed by AI systems, since such processing may result in decisions that lead to harmful discrimination. The IEEE CIS SHIELD Technical Committee aims to address these challenges by carefully researching the Ethical, Legal, Social, Environmental, and Human Dimensions of AI. This tutorial is part of the activities carried out by the AI Ethics Education and Awareness (AEEA) Taskforce of IEEE CIS SHIELD.
We will analyze how tools developed in the context of Explainable Artificial Intelligence (XAI) can assist developers in producing technically robust intelligent agents that both generate decisions a human can understand and explicitly explain those decisions. In this way, the underlying models can be scrutinized to verify whether automated decisions are made according to accepted rules and principles, so that decisions can be trusted and their impact justified. This is crucial in light of the European Union (EU) AI Act's requirements for interpretability and explainability, as outlined in Articles 13 and 86; operationalizing these requirements creates unprecedented technical compliance challenges for AI systems.
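To make this concrete, the minimal sketch below (not taken from the tutorial materials) shows one way a decision can be paired with a human-readable explanation: an inherently interpretable decision tree whose rule path is reported alongside each prediction. The dataset, the helper function explain_decision, and the output format are illustrative assumptions, using only scikit-learn.

# Minimal illustrative sketch: pair an automated decision with the
# human-readable rule path that produced it. Dataset and helper are
# assumptions for illustration, not the tutorial's actual materials.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names

# A shallow tree keeps the model inherently interpretable.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain_decision(model, x, feature_names):
    """Return the decision for sample x and the rules along its path."""
    tree = model.tree_
    # Nodes visited from root to leaf for this single sample.
    node_path = model.decision_path(x.reshape(1, -1)).indices
    rules = []
    for node in node_path:
        if tree.children_left[node] == tree.children_right[node]:
            continue  # leaf node: no splitting rule to report
        feat, thr = tree.feature[node], tree.threshold[node]
        op = "<=" if x[feat] <= thr else ">"
        rules.append(f"{feature_names[feat]} = {x[feat]:.2f} {op} {thr:.2f}")
    decision = model.predict(x.reshape(1, -1))[0]
    return decision, rules

decision, rules = explain_decision(model, X[0], names)
print(f"Decision: class {decision}")
for r in rules:
    print(f"  because {r}")

Post-hoc attribution tools (e.g., feature-importance or surrogate methods) serve a similar role for opaque models, but rule-based explanations like the one above map most directly onto the "understand and explain" requirement discussed here.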
L. Nannini, J.M. Alonso-Moral, A. Catalá, M. Lama, S. Barro, "Operationalizing explainable AI in the EU regulatory ecosystem", IEEE Intelligent Systems, 2024, https://doi.org/10.1109/MIS.2024.3383155
This tutorial is partly supported by the MAIXAI4STRUST project (PID2024-157680NB-I00), funded by MCIN/AEI/10.13039/501100011033, and by the Fundación Ramón Areces (Spain) through the CONFIA project.
@IEEE WCCI 2026 - Tutorial "Operationalizing AI Explainability for EU AI Act Compliance" (OPXAI-WCCI2026)