Abstract
Modern recommender systems operate in uniquely dynamic settings: user interests, item pools, and popularity trends shift continuously, and models must adapt in real time without forgetting past preferences. While existing tutorials on continual or lifelong learning cover broad machine learning domains (e.g., vision and graphs), they do not address recommendation-specific demands—such as balancing stability and plasticity per user, handling cold-start items, and optimizing recommendation metrics under streaming feedback.
This tutorial aims to make a timely contribution by filling that gap. We begin by reviewing the background and problem settings, followed by a comprehensive overview of existing approaches. We then highlight recent efforts to apply continual learning to practical deployment environments, such as resource-constrained systems and sequential interaction settings.
Finally, we discuss open challenges and future research directions. We expect this tutorial to benefit researchers and practitioners in recommender systems, data mining, AI, and information retrieval across academia and industry.
The audience is expected to have basic knowledge of probability, linear algebra, and machine learning, but no prior familiarity with specific continual learning algorithms is required.
Outline and Timeline
This tutorial is designed to run for a half day, totaling approximately 3 hours including short breaks. It will be organized into the following parts:
Part I: Introduction and Background (30 min)
Problem definitions and settings
Key challenges
Applications and use cases
Part II: Experience-Replay-Based Methods (35 min; a minimal illustrative sketch follows this outline)
Sample selection for experience replay
Replay-based model enhancement
Part III: Regularization-Based Methods (35 min; also illustrated by the sketch following this outline)
What knowledge to regularize
Which temporal knowledge to regularize
Personalization of regularization
Part IV: Beyond Traditional Settings (35 min)
Resource-constrained environments
Sequential interaction environments
Part V: Open Challenges and Future Directions (35 min)
Trustworthiness (e.g., fairness, explainability, robustness)
Adaptation to foundation models
Unified models for recommendation and search
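To make Parts II and III concrete, the following is a minimal, self-contained sketch of one continual update for a matrix-factorization recommender, written in plain NumPy. It is not any specific method covered in the tutorial: all names, the squared-error loss, and the hyperparameters are illustrative assumptions. New interactions are interleaved with samples replayed from a small buffer (experience replay, Part II), and user/item embeddings are pulled toward their previous-period values by a simple stability regularizer (Part III).

# Minimal sketch: one continual update combining experience replay (Part II)
# and a stability regularizer toward the previous model (Part III).
# All names, losses, and hyperparameters below are illustrative assumptions.
import random
import numpy as np

random.seed(0)
rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 200, 16

# Previous-period embeddings, kept frozen as the regularization target.
U_prev = rng.normal(scale=0.1, size=(n_users, dim))
V_prev = rng.normal(scale=0.1, size=(n_items, dim))
U, V = U_prev.copy(), V_prev.copy()

# A small buffer of past (user, item, label) interactions and a new data block.
replay_buffer = [(random.randrange(n_users), random.randrange(n_items), 1.0)
                 for _ in range(50)]
new_block = [(random.randrange(n_users), random.randrange(n_items), 1.0)
             for _ in range(200)]

lr, lam_stability, replay_ratio = 0.05, 0.1, 0.25

def sgd_step(u, i, y):
    # Squared-error gradient plus a pull toward the previous-period embeddings.
    err = y - U[u] @ V[i]
    grad_u = -err * V[i] + lam_stability * (U[u] - U_prev[u])
    grad_v = -err * U[u] + lam_stability * (V[i] - V_prev[i])
    U[u] -= lr * grad_u
    V[i] -= lr * grad_v

# Interleave the new block with replayed samples, then train on the mixture.
n_replay = min(int(replay_ratio * len(new_block)), len(replay_buffer))
batch = new_block + random.sample(replay_buffer, k=n_replay)
random.shuffle(batch)
for u, i, y in batch:
    sgd_step(u, i, y)

# Crude buffer refresh: randomly overwrite a few slots with new interactions
# (a stand-in for the sample-selection strategies discussed in Part II).
for triple in random.sample(new_block, k=10):
    replay_buffer[random.randrange(len(replay_buffer))] = triple

The methods surveyed in Parts II and III refine exactly these two pieces: the uniform replay sampling is replaced by influence- or structure-aware sample selection, and the plain L2 pull toward the previous model is replaced by distillation-based or importance-weighted regularizers, often personalized per user.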
Speakers
The presenters and contributors of this tutorial are Seunghan Lee, Seunghyun Baek, Dojun Hwang, Hyunsik Yoo, and SeongKu Kang. Their biographies and areas of expertise are provided below.
Seunghan Lee is a first-year Master’s student in the Department of Computer Science and Engineering at Korea University. His research focuses on learning from heterogeneous information in recommender systems, encompassing multi-modal content and complex user behaviors, as well as continual recommender systems. His work has been published in major conferences, including CIKM.
Seunghyun Baek is a first-year Master's student in the Department of Computer Science and Engineering at Korea University. He has worked on designing continually updated multi-stage pipelines for recommender systems. His research interests lie in LLM-based recommendation and continual learning for recommendation.
Dojun Hwang is a final-year B.S. student in the Department of Computer Science and Engineering at Korea University. He has worked on designing recommender systems, in particular using large language models as re-rankers. His research interests are large language models for recommendation and information retrieval.
Hyunsik Yoo is a fourth-year Ph.D. student in the Siebel School of Computing and Data Science at the University of Illinois Urbana-Champaign. His research focuses on developing data mining and machine learning techniques for recommender systems and graph mining models that are adaptive, trustworthy, and user-inclusive. His work has been published in major conferences, including KDD, SIGIR, TheWebConf, WSDM, and ICML. He has also served as a program committee member or reviewer for venues such as KDD, CIKM, TheWebConf Companion, AAAI, NeurIPS, DSAA, and TIST.
For more information, please refer to his personal website at https://sites.google.com/view/hsyoo.
SeongKu Kang is an Assistant Professor in the Department of Computer Science and Engineering at Korea University. Prior to that, he was a postdoctoral researcher at the University of Illinois Urbana-Champaign. His research interests lie in data mining, recommender systems, and information retrieval. He has published more than 30 papers in major conferences such as KDD, TheWebConf, CIKM, SIGIR, and EMNLP. He received the Stars of Tomorrow Award from Microsoft Research Asia in 2023, and his paper was selected as a Best Paper at WSDM 2025. He has also actively contributed to the research community by serving as a program committee member or reviewer for venues including KDD, TheWebConf, AAAI, SIGIR, ACL, SDM, TIST, and TKDE, and was recognized as an outstanding reviewer at KDD.
For more information, please refer to his personal website at https://seongku-kang.github.io/.
References
This list includes the prior works covered in this tutorial:
Ahrabian, K., Xu, Y., Zhang, Y., Wu, J., Wang, Y., & Coates, M. (2021, October). Structure aware experience replay for incremental learning in graph-based recommender systems. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (pp. 2832-2836).
Mi, F., Lin, X., & Faltings, B. (2020, September). ADER: Adaptively distilled exemplar replay towards continual learning for session-based recommendation. In Proceedings of the 14th ACM Conference on Recommender Systems (pp. 408-413).
Cai, G., Zhu, J., Dai, Q., Dong, Z., He, X., Tang, R., & Zhang, R. (2022, July). ReLoop: A self-correction continual learning loop for recommender systems. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 2692-2697).
Zhu, J., Cai, G., Huang, J., Dong, Z., Tang, R., & Zhang, W. (2023, August). ReLoop2: Building self-adaptive recommendation models via responsive error compensation loop. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 5728-5738).
Zhang, X., Chen, Y., Ma, C., Fang, Y., & King, I. (2024, March). Influential exemplar replay for incremental learning in recommender systems. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 8, pp. 9368-9376).
Qin, J., Liu, W., Zhang, W., & Yu, Y. (2025, April). D2K: Turning historical data into retrievable knowledge for recommender systems. In Proceedings of the ACM on Web Conference 2025 (pp. 472-482).
Wang, Y., Zhang, Y., Valkanas, A., Tang, R., Ma, C., Hao, J., & Coates, M. (2023, June). Structure aware incremental learning with personalized imitation weights for recommender systems. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 4, pp. 4711-4719).
Xu, Y., et al. (2020, October). GraphSAIL: Graph structure aware incremental learning for recommender systems. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management.
Wang, Y., Zhang, Y., & Coates, M. (2021, October). Graph structure aware contrastive knowledge distillation for incremental learning in recommender systems. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (pp. 3518-3522).
Yoo, H., Kang, S., Qiu, R., Xu, C., Wang, F., & Tong, H. (2025). Embracing plasticity: Balancing stability and plasticity in continual recommender systems. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval.
Lee, G., Kang, S., Kweon, W., & Yu, H. (2024, August). Continual collaborative distillation for recommender system. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 1495-1505).
Lee, G., Yoo, H., Hwang, J., Kang, S., & Yu, H. (2025). Leveraging historical and current interests for continual sequential recommendation. arXiv preprint arXiv:2506.07466.
Liu, L., Cai, L., Zhang, C., Zhao, X., Gao, J., Wang, W., ... & Li, Q. (2023, July). LinRec: Linear attention mechanism for long-term sequential recommender systems. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 289-299).
Yoo, H., Li, T. W., Kang, S., Liu, Z., Xu, C., Qi, Q., & Tong, H. (2025). Continual low-rank adapters for LLM-based generative recommender systems. arXiv preprint arXiv:2510.25093.
Chen, H., Razin, N., Narasimhan, K., & Chen, D. (2025). Retaining by doing: The role of on-policy data in mitigating forgetting. arXiv preprint arXiv:2510.18874.
Lai, S., Zhao, H., Feng, R., Ma, C., Liu, W., Zhao, H., Lin, X., Yi, D., Xie, M., Zhang, Q., Liu, H., Meng, G., & Zhu, F. (2025). Reinforcement fine-tuning naturally mitigates forgetting in continual post-training. arXiv preprint arXiv:2507.05386.