TX4Nets PROGRAM
Monday, June 3, 2024
14:00 - 14:20 Welcome Message by the Organizers
Omran Ayoub (University of Applied Sciences of Southern Switzerland, Switzerland) and Tania Panayiotou (University of Cyprus, Nicosia, Cyprus)
14:20 - 15:10 Keynote: Carlos Natalino Da Silva (Chalmers University of Technology, Sweden), Explainable Reinforcement Learning: Towards Trustworthy Autonomous Network Operations
Session Chair: Omran Ayoub (University of Applied Sciences of Southern Switzerland, Switzerland)
15:10 - 16:30 Session 1: Explainability to Interpret and Optimize
Session Chair: Carlos Natalino Da Silva (Chalmers University of Technology, Sweden)
Traffic Prediction- and Explainable Artificial Intelligence-based Dynamic Routing in Software-Defined Elastic Optical Networks
Róza Goscien (Wroclaw University of Science and Technology, Poland)
Explainable Artificial Intelligence for Interpretable Multimodal Architectures with Contextual Input in Mobile Network Traffic Classification
Francesco Cerasuolo (University of Napoli Federico II, Italy), Idio Guarino (University of Napoli Federico II, Italy), Vincenzo Spadari (University of Napoli Federico II, Italy), Giuseppe Aceto (University of Napoli Federico II, Italy) and Antonio Pescapé (University of Napoli Federico II, Italy)
Navigating Explainable Privacy in Federated Learning
Chamara Sandeepa (University College Dublin, Ireland), Thulitha Senevirathna (University College Dublin, Ireland), Bartlomiej Siniarski (University College Dublin, Ireland), Shen Wang (University College Dublin, Ireland), Madhusanka Liyanage (University College Dublin, Ireland)
Explainable Artificial Intelligence-Guided Optimization of a Multilayer Network Regression Model
Katarzyna Duszynska (Wroclaw University of Science and Technology, Poland), Pawel Polski (Wroclaw University of Science and Technology, Poland), Michal Wlosek (Wroclaw University of Science and Technology, Poland), Aleksandra Knapinska (Wroclaw University of Science and Technology, Poland), Piotr Lechowicz (Chalmers University of Technology, Sweden), Krzysztof Walkowiak (Wroclaw University of Science and Technology, Poland)
16:30 - 17:00 Coffee break
17:00 - 17:50 Tutorial: Jacopo Talpini (University of Milano-Bicocca, Italy), Uncertainty Quantification in Neural Networks for Reliable AI-based Network Management
Session Chair: Tania Panayiotou (University of Cyprus, Nicosia, Cyprus)
17:50 - 18:45 Session 2: Explainability, Privacy and Trust
Session Chair: Róza Goscien (Wroclaw University of Science and Technology, Poland)
Invited Talk: The AI Dark Triad of Next Generation Internet
Silvia Giordano (University of Applied Sciences of Southern Switzerland, Switzerland)
Reputation-based Trustworthiness Degree in Interference-variable Vehicular Networks
Claudia Leoni (Roma Tre University, Italy), Anna Maria Vegni (Roma Tre University, Italy), Valeria Loscrí (Inria Lille-Nord Europe, France), Abderrahim Benslimane (University of Avignon & LIA/CERI, France)
Federated Learning for Network Traffic Prediction
Sadananda Behera (National Institute of Technology Rourkela, India), Saroj Kumar Panda (LTIMindtree, India), Tania Panayiotou (University of Cyprus & KIOS Research and Innovation Center of Excellence, Cyprus) and Georgios Ellinas (University of Cyprus & KIOS Research and Innovation Center of Excellence, Cyprus)
18:45 - 19:00 Conclusion and Final Remarks
Keynote: Carlos Natalino Da Silva
Explainable Reinforcement Learning: Towards Trustworthy Autonomous Network Operations
Abstract: TBD
Biography: Carlos Natalino is a Researcher with the Optical Networks Unit at Chalmers University of Technology. His research focuses on network automation and on the challenges and opportunities of applying machine learning in that context. Over the past years, he has been investigating how to leverage machine learning for optical network design and operation, in problems such as resource (e.g., spectrum) efficiency. Carlos has been involved in several national and international projects funded by research bodies in the EU and Brazil, and has taught computer programming courses in Brazil and Sweden. He is a member of IEEE and Optica.
Tutorial: Jacopo Talpini
Uncertainty Quantification in Neural Networks for Reliable AI-based Network Management
Abstract: Over the past few years, the field of Deep Learning (DL) has advanced considerably, and modern DL models are now being deployed in everyday applications. However, in many safety-critical applications, as well as in scientific research, quantifying the uncertainty of a DL model's predictions plays a crucial role. This tutorial introduces the basics of Bayesian Neural Networks, shows how they can address the problem of estimating model uncertainty, and covers the most common techniques for scaling this approach to deep neural networks. It also illustrates how uncertainty quantification can enhance the trustworthiness of machine learning models for network management.
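As a concrete illustration of the kind of technique the tutorial covers, the sketch below applies Monte Carlo dropout, one common approximation to Bayesian inference in deep networks: dropout is kept active at prediction time, several stochastic forward passes are made, and their spread is summarized as predictive entropy. The model architecture, layer sizes, and number of samples are illustrative assumptions, not material from the tutorial itself.

# A minimal sketch of Monte Carlo dropout for predictive uncertainty,
# assuming PyTorch is available; architecture and sample count are illustrative.
import torch
import torch.nn as nn

class MCDropoutClassifier(nn.Module):
    """Small classifier whose dropout layer is reused at inference time."""
    def __init__(self, in_features=20, hidden=64, n_classes=2, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=50):
    """Average several stochastic forward passes and report predictive entropy."""
    model.train()  # keep dropout active so each pass samples a different sub-network
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )  # shape: (n_samples, batch, n_classes)
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

if __name__ == "__main__":
    model = MCDropoutClassifier()
    x = torch.randn(4, 20)  # a toy batch of 4 feature vectors (e.g., flow statistics)
    mean_probs, entropy = predict_with_uncertainty(model, x)
    print(mean_probs, entropy)

Keeping the model in training mode during inference is what makes each pass sample a different sub-network; averaging many such passes gives a cheap approximation to a Bayesian predictive distribution, and high entropy flags inputs the model is unsure about.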
Biography: Jacopo Talpini is a Ph.D. student in Computer Science at the University of Milano-Bicocca. His research interests lie at the intersection of machine learning and security, with a focus on trustworthy machine learning for network intrusion detection.