Schedule
Sunday, December 7th 2025, 8am-5pm
San Diego Convention Center, Upper Level Room 25ABC
Accepted Papers on OpenReview - NeurIPS Event Page
The workshop will be held at the San Diego Convention Center, Upper Level Room 25ABC, on Sunday, December 7th, 2025, from 8am to 5pm.
8:15 - 9:00 -- Yiran Huang & Zeynep Akata (Technical University of Munich)
Abstract: As foundation models face pressure to rapidly adapt to dynamic global changes, In-Context Learning (ICL) offers a mechanism for adaptation without expensive parameter updates. However, the mechanisms governing how Large Language Models (LLMs) extend this capability to new, unseen modalities remain poorly understood. In this talk, I present a systematic reverse-engineering of multimodal ICL, using controlled synthetic tasks to isolate the architectural and statistical drivers of adaptation.
10:00 - 10:45 -- Irina Rish (Mila)
11:00 - 11:45 -- Ludwig Schmidt (Stanford)
13:30 - 14:15 -- Christopher Kanan (University of Rochester)
Abstract: Continual learning must evolve to support lifelong foundation models. Classical continual learning optimized the wrong objective by focusing on catastrophic forgetting under unrealistic storage constraints. In contrast, modern foundation models are limited by compute, not memory, and require update strategies that maximize retention and forward transfer per unit of computation. I will present a framework for compute-bounded replay and recent methods from my lab, which enable efficient updates for large pretrained models. I will also discuss implications for multimodal models, out-of-distribution generalization, and the long-term goal of synthetic minds that acquire and consolidate knowledge over time. Together, these results outline a path toward scalable continual learning as the default training paradigm for foundation models.
14:15 - 15:00 -- Vaggelis Dorovatas & Rahaf Aljundi (Toyota Motor Europe)
Abstract: Continual learning enables models to adapt to streaming data, but traditional parameter updates risk catastrophic forgetting. In-context learning offers a complementary path. In this talk, I argue that efficient, continuous adaptation need not occur solely in parameter space. I will show how memory-based mechanisms enable rapid adaptation, present some of my work in this direction, and then discuss how combining fast memory-driven updates with slow model consolidation could shape the future of continual learning.
Best Paper
Yuan Yin, Shashanka Venkataramanan, Tuan-Hung VU, Andrei Bursuc, Matthieu Cord
Best Paper Runner-Up
Linxi Zhao, Sofian Zalouk, Christian Belardi, Justin Lovelace, Jin Zhou, Kilian Weinberger, Yoav Artzi, Jennifer Sun
Outstanding Paper - Honorable Mention 1
Idan Shenfeld, Jyo Pari, Pulkit Agrawal
Outstanding Paper - Honorable Mention 2
Abel Gurung, Joseph Campbell