Paper Submission Deadline (Extended): Jan 30, 2025
In an era dominated by data-driven insights, designing and building trustworthy Computational Intelligence (CI) systems presents fundamental challenges, especially when leveraging data from decentralized sources. Privacy regulations and ethical concerns have raised the demand for CI systems that are both collaborative and reliable, all while upholding stringent standards of data privacy and transparency. Federated Learning (FL) has emerged as a groundbreaking solution to address these challenges. By allowing multiple parties—such as institutions, corporations, or devices—to collaboratively train models without directly sharing their data, FL provides a paradigm shift that enables decentralized learning while preserving data privacy. This approach minimizes the risk of exposing sensitive information and aligns well with evolving regulatory frameworks, such as the GDPR in Europe, which prioritizes data sovereignty.
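For readers less familiar with the paradigm, the collaborative training described above can be sketched with FedAvg-style weighted averaging, the canonical FL aggregation rule: each party trains locally and only model parameters, never raw data, reach the server. This is a minimal illustration with hypothetical names and toy values, not a method prescribed by the session.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Server-side step: average client parameter vectors,
    weighted by each client's local dataset size (FedAvg-style)."""
    total = sum(client_sizes)
    stacked = [np.asarray(w, dtype=float) for w in client_weights]
    return sum(w * (n / total) for w, n in zip(stacked, client_sizes))

# One illustrative round: three clients with different data volumes
# send only their locally trained parameters to the server.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
global_w = fed_avg(clients, sizes)  # new global model, no raw data exchanged
```

Privacy follows from what is communicated: only the parameter vectors and dataset sizes leave each client, which is the property that aligns FL with regulations such as the GDPR.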
However, to truly achieve trustworthiness in CI, ensuring privacy is only one piece of the puzzle. Explainability and transparency are equally critical, as they allow end-users and stakeholders to understand and trust the decisions made by these systems. The need for model interpretability becomes particularly complex in a Federated Learning setting, where data resides on diverse, decentralized nodes. Typically, explainability is achieved either by designing inherently interpretable models or through post-hoc interpretability techniques that elucidate model behavior after training. Yet, when these techniques are applied in the FL context, maintaining both privacy and interpretability poses significant technical and ethical challenges.
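One illustrative pattern for combining post-hoc interpretability with FL, sketched here under assumed names and toy data rather than as a prescribed method, is for each client to compute an explanation such as permutation feature importance on its own local data, with the server then aggregating the per-client scores in the same size-weighted fashion as the model updates:

```python
import numpy as np

def local_permutation_importance(model_fn, X, y, rng):
    """Client-side post-hoc step: increase in squared error when each
    feature column is shuffled, computed only on local data."""
    base = np.mean((model_fn(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])  # break the feature/target association
        scores.append(np.mean((model_fn(Xp) - y) ** 2) - base)
    return np.array(scores)

def aggregate_importances(per_client_scores, client_sizes):
    """Server-side step: size-weighted average of local importances."""
    total = sum(client_sizes)
    return sum(s * (n / total) for s, n in zip(per_client_scores, client_sizes))

# Toy single-client illustration: feature 1 has a zero coefficient,
# so permuting it cannot change predictions and its importance is zero.
rng = np.random.default_rng(0)
X = np.array([[1., 5.], [2., 6.], [3., 7.], [4., 8.]])
y = X @ np.array([2.0, 0.0])
model_fn = lambda X: X @ np.array([2.0, 0.0])
per_client = [local_permutation_importance(model_fn, X, y, rng)]
agg = aggregate_importances(per_client, [len(X)])
```

The appeal of this arrangement is that, as with the model updates, only aggregate scores leave each client; how faithfully such aggregated explanations reflect the global model is precisely one of the open questions this session targets.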
The aim of this special session is to provide a multidisciplinary and international forum to discuss emerging methodologies for the collaborative learning of CI systems that are both private and trustworthy. This session seeks to explore advanced FL techniques that incorporate robust privacy measures alongside mechanisms for explainability, with the overarching goal of fostering CI models that users and stakeholders can trust. Attendees from various fields—such as machine learning, ethics, law, and policy—are invited to contribute perspectives and solutions that address the unique challenges of building decentralized, privacy-preserving, and interpretable CI systems.
This Special Session is supported by the project “FAIR - Future Artificial Intelligence Research” - Spoke 1 - funded by the European Union under the NextGenerationEU programme.
The topics of interest include (but are not limited to):
Trustworthy Collaborative Artificial Intelligence
Collaborative/Federated Learning of Explainable Artificial Intelligence Models
Collaborative/Federated Learning of Interpretable-by-Design Models
Collaborative/Federated Learning and post-hoc explainability techniques
Collaborative/Federated Learning of Trustworthy Large Language Models
Collaborative/Federated Supervised, Semi-Supervised and Unsupervised Learning for Fuzzy and Neuro-Fuzzy Models
Collaborative/Federated Supervised, Semi-Supervised and Unsupervised Learning for Neural Networks and Deep Learning Models
Neural Networks and Fuzzy sets theory for Collaborative/Federated Learning
Privacy Preserving Machine Learning
Threats, Attacks, and Defenses against Federated Learning
Applications of Trustworthy Collaborative Artificial Intelligence
Please submit your paper directly through the IJCNN 2025 submission website as a regular paper (Main Track), selecting the special session “Collaborative Learning of Trustworthy Computational Intelligence Systems (CLOTHES 2025)” as the primary Subject Area.
Submitted papers will be peer-reviewed according to the same criteria as the other IJCNN tracks.
Papers accepted for the special session will be included in the IJCNN 2025 proceedings and published in the IEEE Xplore Digital Library.
Organizing Committee
Pietro Ducange, University of Pisa, Italy
Michela Fazzolari, Istituto di Informatica e Telematica, CNR, Italy
Francesco Marcelloni, University of Pisa, Italy
Alessandro Renda, University of Pisa, Italy
Publicity and Communication Chair:
José Luis Corcuera Bárcena, University of Pisa, Italy
Corresponding Chair's E-mail:
pietro.ducange@unipi.it