Special Session at the 2026 IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS 2026)
University of Pisa, Italy
September 21st – 23rd, 2026
The emergence of Foundation Models has reshaped the AI landscape, delivering unprecedented performance across a wide range of tasks. Yet, despite their impressive capabilities, these models still lean heavily on the i.i.d. assumption and often struggle to generalize across tasks that evolve over time, particularly when confronted with significant distribution shifts. The problem is intensified by the massive size of today’s Foundation Models, often built with billions of parameters, which makes them heavy, resource-intensive, and slow to fine-tune, ultimately limiting their responsiveness in dynamic scenarios.
Catastrophic forgetting remains an unresolved issue even at this scale, affecting virtually all data modalities, including language and vision. As a result, advancing the field still requires research efforts in Continual Learning to mitigate forgetting and enable models that can adapt sustainably and evolve over time.
In this special session, we aim to explore the challenges and opportunities at the intersection of Transformer-based architectures and Continual Learning. We welcome contributions proposing novel benchmarks and evaluation protocols, as well as innovative methodologies that support efficient, robust adaptation in continually changing environments.
Topics of interest include, but are not limited to:
Benchmarks and Training Environments for Foundation Models and Continual Learning
Evaluation Metrics and Protocols tailored to Foundation Models in Continual Learning
Efficient Solutions for Large Models Continual Adaptation
Few-shot Continual Fine-Tuning of Large Models
Online Continual Adaptation of Large Models
Applications of Transformer-based Architectures in Continual Learning
Continual Reinforcement Learning with Foundation Models
Foundation Models for Efficient Adaptation in Robotics
We follow the call-for-papers guidelines provided for the main conference; please refer to the main conference website for details.
Paper Submission Deadline: March 15th, 2026
Luigi Quarantiello is a PhD candidate in the Italian National PhD Program in Artificial Intelligence at the University of Pisa. He is a member of the COLLAGE Lab and of the Computational Intelligence and Machine Learning group. His research focuses on the efficient adaptation of Foundation Models to dynamic and evolving scenarios, aiming to bridge the gap between large-scale AI models and practical deployment in complex environments.
Irene Testa is a PhD student in the Italian National PhD Program in Artificial Intelligence at the University of Pisa and a member of the COLLAGE Lab. Her research interests include continual learning and the development of robust and efficient training strategies for models operating in dynamic and evolving environments.
Vincenzo Lomonaco is an Associate Professor in the Department of AI, Data and Decision Sciences at Luiss Guido Carli University, where he leads the COLLAGE Lab. He is also the co-founder of ContinualIST, a spin-off of the University of Pisa, a founding member of the non-profit organization ContinualAI, and the Principal Investigator of several research projects and industrial collaborations. In just six years of research following his PhD, he has secured over €2 million in funding, including the prestigious Italian Science Fund (FIS) – Starting Grant, a PRIN project, and collaborations with international organizations such as Intel, Meta, Leonardo, and ESA. Over the past decade, he has published more than 80 scientific papers in top-tier conferences and journals on the topic of Sustainable Artificial Intelligence. His pioneering research in continual and decentralized machine learning has been recognized with the Marco Somalvico Award 2025 from the Italian Association for Artificial Intelligence.