Special Session at the 2026 IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS 2026)
University of Pisa, Italy
September 21–23, 2026
Large Language Models (LLMs) have achieved remarkable performance across a wide range of tasks, yet they suffer from a fundamental limitation: once trained, their knowledge remains largely static. Updating or correcting this knowledge typically requires expensive and impractical retraining procedures, making current models ill-suited for dynamic, real-world environments.
This special session focuses on enabling LLMs to evolve after deployment: adapting to new information, revising outdated or incorrect facts, selectively forgetting learned content, and remaining reliable over time. The session brings together researchers working on continual learning, knowledge editing, machine unlearning, and parameter-efficient adaptation techniques for LLMs.
As real-world information changes, static language models increasingly produce outdated or incorrect outputs. Retrieval-Augmented Generation (RAG) can partially mitigate this issue by injecting external knowledge at inference time, but it neither corrects outdated internal representations nor helps in scenarios where retrieval is impractical.
The inability to efficiently update, adapt, or forget knowledge raises significant practical, ethical, and safety concerns, especially as LLMs are deployed in high-impact and safety-critical applications. This special session aims to explore methods and evaluation frameworks that allow LLMs to evolve continuously, without full retraining and without catastrophic forgetting.
Recent advances in continual and incremental pre-training, targeted knowledge editing, and machine unlearning demonstrate promising directions toward adaptive language models. By bringing these communities together, the session aims to foster cross-fertilization and accelerate progress toward reliable, adaptive, and accountable language technologies.
We invite original research contributions on (but not limited to) the following topics:
Online and continual learning methods for LLMs
Knowledge editing and targeted model updates
Machine unlearning and selective forgetting
Parameter-efficient adaptation strategies, including test-time and inference-time methods
Evaluation protocols and benchmarks for adaptability, retention, and forgetting
Automated knowledge updating in Retrieval-Augmented Generation (RAG) systems, including ingestion and incremental updates
Paper Submission Deadline: March 15, 2026
Notification of Acceptance: May 15, 2026
Camera-Ready Submission: June 15, 2026
Author Registration Deadline: June 30, 2026
Conference Dates: September 21–23, 2026
Papers must be submitted according to the IEEE EAIS 2026 submission guidelines.
For formatting instructions and submission details, please refer to the main conference website.
All submissions will be peer-reviewed and evaluated based on originality, technical quality, relevance to the special session, and clarity of presentation.
Alessandro Bondielli
Antonio Carta
Andrea Cossu
Lucia Passaro
Department of Computer Science, University of Pisa
Contact < alessandro dot bondielli at unipi dot it>