International Workshop on Trustworthiness and Reliability in Neurosymbolic AI will be held at the International Joint Conference on Neural Networks 2025 (IJCNN 2025)
Neurosymbolic AI is a growing trend that merges recent advances in deep learning with logic- and rule-based methods. Its applications are spreading across many contexts, from image classification to Visual Question Answering. It offers an expressive and semantic power not usually provided by classical deep learning, and it holds great potential for explainability thanks to the logic-based methodologies at its core. These capabilities are only a starting point for XAI: they need to be explored in detail to make entire models interpretable, in every component and not only through the natural relationships that can be discovered. This workshop will focus on two main aspects:
Extending the exploration of explainability to multiple contexts, developing and adapting techniques that relate classical deep learning approaches to their symbolic extensions.
Extending the understanding of neurosymbolic models to a deep and rarely analyzed aspect of AI, namely the resilience of a methodology.
These two concepts underpin the trustworthiness and reliability of Neurosymbolic AI, increasing end-users' confidence in choosing and relying on these methodologies for computer-aided applications.
Topics of interest include, but are not limited to:
Soft Computing methodologies
Symbolic and Neurosymbolic AI
Logic and Rule-based methods
AI methods for explainability, interpretability and reliability
Resilient AI models
From general-purpose to task-specific methodologies
Machine Learning and Deep Learning based AI methodologies
Data-driven decision-making
Multimodal Learning strategies
Scalability and optimization of intelligent systems
Generative AI and applications
The workshop will be held on July 5th, from 2:00 PM to 5:00 PM. It will include standard paper presentations and a poster session.
Full program:
2:00PM - Keynote Invited Speaker: Riccardo Rizzo
2:40PM - Challenge results: Explainable AI for Educational Question-Answering
First Session: Trustworthiness and Explainability for Reasoning (Chair Angelo Casolaro)
3:00PM - Speaking in Words, Thinking in Logic: A Dual-Process Framework in QA Systems (Remote)
3:20PM - Topology-Driven Explainable GNNs for Trojan Detection in Deep Learning
3:40PM - Break (20 min)
Second Session: Explainability in Network Optimization (Chair Massimiliano Giordano Orsini)
4:00PM - Parallel Optimization of Quantized CNNs for Efficient GPU Inference
4:20PM - Stability-aware Neuromorphic Computation
4:40PM - Announcements and closing
Riccardo Rizzo - ICAR CNR
Title. Graph Neural Networks and Explainability
Abstract. Graph neural networks are algorithms capable of processing graph data and, like many trained neural networks, they are black boxes, difficult to debug or modify. Moreover, it is difficult for researchers or users to extract new knowledge from their predictions. This problem is more pronounced in graph neural networks because of the nature of their input, which makes the study of methods and algorithms that explain their processing mechanisms an active field of research.
One of the most common explainability mechanisms involves observing the network's inputs to build a subgraph containing the characteristics useful for network prediction. But can we use some graph properties to speed up this process?
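The subgraph-based mechanism the abstract mentions can be illustrated, in a deliberately toy setting, as a perturbation loop: remove each edge in turn and keep only the edges whose removal would change the model's output. Everything below (the `predict` triangle "classifier" and the `explain_by_subgraph` helper) is an illustrative sketch, not the speaker's actual method.

```python
from itertools import combinations

def predict(edges, nodes):
    """Toy graph 'classifier': returns 1 if the graph contains a triangle."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for a, b, c in combinations(sorted(nodes), 3):
        if b in adj[a] and c in adj[a] and c in adj[b]:
            return 1
    return 0

def explain_by_subgraph(edges, nodes):
    """Greedy perturbation-based explanation: drop each edge in turn and
    keep only those whose removal would flip the model's prediction.
    The surviving edges form a small explanatory subgraph."""
    target = predict(edges, nodes)
    kept = list(edges)
    for e in list(edges):
        trial = [x for x in kept if x != e]
        if predict(trial, nodes) == target:
            kept = trial  # edge e is not needed for the prediction
    return kept

nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]  # triangle 0-1-2 plus a tail
print(explain_by_subgraph(edges, nodes))  # → [(0, 1), (1, 2), (0, 2)]
```

The tail edges are pruned because removing them leaves the prediction unchanged, so only the triangle survives as the explanation. Real explainers (e.g., mask-learning approaches for GNNs) optimize a soft version of this idea rather than testing edges one by one.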
Riccardo Rizzo is a senior researcher at the Institute for High Performance Computing and Networking of the CNR. His research focuses on machine learning and applications to biomedical data analysis and genomic sequence analysis.
He was a visiting Professor at the University of Pittsburgh in 2001.
He has been a member of the program committees of many national and international conferences and is a member of the editorial board of the international journals BMC Bioinformatics, Frontiers in Genomics and Frontiers in Neuroinformatics. He is the author of more than 150 scientific articles, 33 of which appeared in international journals such as "IEEE Transactions on Neural Networks", "Neural Computing and Applications", "Neural Processing Letters" and "BMC Bioinformatics". He has participated in numerous national projects.
The workshop is growing fast, and the organizers are happy to announce a challenge: Explainable AI for Educational Question-Answering. All details can be found on the challenge page.
Submissions must follow the IJCNN2025 rules (https://2025.ijcnn.org/authors/initial-author-instructions). Authors are invited to submit:
Full papers, up to 8 pages
Short papers, up to 4 pages
Short papers may also be presented in the poster session. Full and short papers will be published in the conference proceedings.
Submissions must be made on CMT via the following link: https://cmt3.research.microsoft.com/IJCNN2025/Track/3/Submission/Create
Paper Submission Deadline – March 27, 2025 (extended)
Paper Acceptance Notification – April 15, 2025
Prof. Angelo Ciaramella - University of Naples Parthenope, Italy
Prof. Le Hoang Son - Vietnam National University, Hanoi, Vietnam
Prof. Emanuel Di Nardo - University of Naples Parthenope, Italy
Prof. Alessio Ferone - University of Naples Parthenope, Italy
Prof. Antonio Maratea - University of Naples Parthenope, Italy
Prof. Ihsan Ullah - Insight SFI Research Center for Data Analytics, University of Galway, Galway, Ireland
Prof. Paola Barra - University of Naples Parthenope, Italy
Dr. Lorenzo Di Rocco - Sapienza University of Rome, Italy
Prof. Quan Thanh Tho - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Prof. Nguyen Duc Anh - Department of IT and Economics, University of South Eastern Norway, Norway
Prof. Fabien Baldacci - Université de Bordeaux, France
Prof. Bui Hoai Thang - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Dr. Tran Thanh Tung - School of Computer Science and Engineering, Ho Chi Minh City International University, Vietnam
Mr. Nguyen Song Thien Long - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Mr. Vo Hoang Nhat Khang - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Ms. Nguyen Hoang Anh Thu - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Mr. Nguyen Quang Duc - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Mr. Bui Cong Tuan - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Prof. Florentin Smarandache - Emeritus Professor, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA
Dr. Said Broumi - Laboratory of Information Processing, Faculty of Science Ben M'Sik, University Hassan II, Casablanca, Morocco
Prof. Habil. Dr. Eng. Valentina Emilia Balas - "Aurel Vlaicu" University of Arad, 77 B-dul Revolutiei, 310130 Arad, Romania
Dr. Rohit Sharma - ABES Engineering College, Ghaziabad 201009, India
Prof. Tran Van Lang - Vietnam Academy of Science and Technology, Vietnam
Prof. Ganeshsree Selvachandran - School of Business, Monash University Malaysia
Prof. Raghvendra Kumar - Department of CSE, GIET University, Gunupur, India
Prof. Vassilis C. Gerogiannis - Department of Digital Systems (Head of the Department), University of Thessaly, GR 41500, Larissa, Greece
Prof. Hiep Xuan Huynh - Can Tho University, Can Tho, Vietnam
Prof. Dr. Horacio González-Vélez - Professor of Computer Systems and Founding Head of The Cloud Competency Centre, National College of Ireland
Dr. Nicola Bena - University of Milan, Italy
Prof. David Taniar - Faculty of Information Technology, Monash University, Australia
Mr. Andrea Capone - University of Naples Federico II, Italy