PRESENTERS
Senior AI Policy & Standards Specialist, Piccadilly Labs; Project Leader, CEN/CENELEC JTC 21 (JT021008 - AI Trustworthiness Framework). Dr. Nannini leads the development of the AI trustworthiness standard referenced in this tutorial through his role in CEN/CENELEC JTC 21 Working Group 4. Through the NoLeFa consortium, he also leads AI compliance training programs for European market surveillance authorities. He has authored 15+ publications on AI explainability and regulatory compliance, including recent work on operationalizing explainable AI in the EU regulatory ecosystem.
Dr. Alonso received his degree in Telecommunication Engineering (2003) and his PhD (2007) from the Technical University of Madrid (UPM), Spain. He is currently an Associate Professor affiliated with the Research Center on Intelligent Technologies of the University of Santiago de Compostela (CiTIUS-USC). He is President of the European Society for Fuzzy Logic and Technology (EUSFLAT), President of the Spanish Network on Trustworthy AI (TELSEC4TAI), Vice-chair of the IEEE-CIS Task Force on Explainable Fuzzy Systems, a member of the IEEE-CIS SHIELD Technical Committee and the related AEEA Task Force, a member of the IEEE-CIS Task Force on Fuzzy Systems Software, and an Associate Editor of the IEEE Computational Intelligence Magazine (IEEE CIM, ISSN: 1556-603X). He was President of the Executive Board and Deputy Coordinator of the H2020-MSCA-ITN-2019 project "Interactive Natural Language Technology for Explainable Artificial Intelligence" (NL4XAI, Grant Agreement No. 860621). He has served on the organizing committee of 6 conferences (including as Local Organizer of ECAI 2024), has been a member of the program committee (PC) of more than 20 conferences, and has organized more than 25 conference events. For example, he co-presented the tutorial "Using fuzzy sets and systems for Explainable Artificial Intelligence – How and Why" at IEEE WCCI 2024. More about Dr. Alonso's research outcomes is available at https://gitlab.citius.usc.es/jose.alonso/xai