Foundation models, despite their impressive capabilities, face a critical challenge: they naturally become outdated. Because they are trained on vast datasets, updating these models frequently is expensive. Crucially, these challenges extend beyond the scope of traditional continual learning, as foundation models require rapid, scalable adaptation to dynamic global changes and to the emergence of both generalized and specialized tasks. This workshop addresses the urgent need for up-to-date foundation models. We invite researchers to explore cost-effective methods for frequent updates and adaptation, minimizing forgetting and deterioration, ensuring a consistent user experience, and designing dynamic evaluations that remain relevant as models evolve.
Below we list some of the topics and questions that this workshop seeks to address.
Theoretical foundations of training FMs. What are the theoretical guarantees for generalization on naturally evolving distributions? Is there a theoretical tradeoff between learning and forgetting on natural data? How can we model how FMs store and update knowledge during pretraining, fine-tuning, merging, and adaptation?
Continual pretraining, fine-tuning, merging, and adaptation methods for FMs. How can we efficiently update FMs on generic pretraining data so that they generalize to new data? Examples include continual learning methods for training FMs at scale, such as new optimization methods, training strategies, and model merging.
Compatible pretraining, fine-tuning, merging, and adaptation methods for FMs. Can we avoid giving incorrect answers to questions that an older generation of the model answered correctly? Contributions include defining compatibility metrics, model update mechanisms, and training interventions that balance the tradeoff between compatibility and generalization.
Understanding temporal shift in FMs and data. How can we characterize the rate of distribution shift for generic pretraining and web data, as well as for domain-specific and task-specific data? Examples include data analyses that identify fast- and slow-changing data, as well as specialized continual learning strategies for specific rates of change, such as daily, monthly, or yearly.
Empirical investigation of forgetting and forward transfer in FMs. How can we avoid forgetting old data and deterioration on previous tasks while continually updating FMs? Examples include learning methods that avoid catastrophic forgetting, such as dataset mixing and replay, as well as regularization methods.
Knowledge editing and unlearning in FMs. How should FMs selectively forget information such as personal information?
Developing dynamic benchmarks and evaluations for FMs. How should evaluations and benchmarks for foundation models change over time? Examples include designing dynamic evaluations and benchmarks that automatically extend over time.
Designing robust evaluation protocols for using evolving FMs as evaluators. How can we evaluate backward compatibility and robustness when updating FMs? Examples include evaluation metrics for FMs used in other ML systems as evaluators or data synthesizers, and evaluating FMs used as assistants.
To submit your paper, please consider the following instructions and guidelines.
All contributions should be made via OpenReview. We welcome submissions of original, unpublished material, as well as work that is currently under review (i.e. has been submitted but not yet accepted elsewhere).
Page limit: Papers should be up to 4 pages, excluding references and supplementary materials.
Template: Please use the NeurIPS 2025 style files.
Double-blind reviews: Authors should anonymize their submissions to ensure a double-blind review process.
LLM policy: In preparing your contributions, LLMs may be used only as a general-purpose writing-assist tool.
Submission of published work: As noted in the NeurIPS workshop guidelines, we discourage submitting work that has been previously published at other conferences in machine learning or related fields. Work that is presented at the main NeurIPS conference should not appear in a workshop, including as part of an invited talk.
Publication. The workshop is non-archival. By default, accepted papers will be made publicly available on OpenReview. Authors can choose to opt out if they do not wish for their work to be shared publicly.
Reviewing. Authors should nominate at least one person per contribution as a reviewer. The expected reviewing load is 2-3 papers. If you'd like to nominate someone as a reviewer or self-nominate, please fill in this form.
Attending the workshop. Our workshop is primarily an in-person event, and authors are asked to present a poster at the workshop if possible. A subset of papers will be selected for presentation in short spotlight talks.
Dual submission policy. We accept dual submissions to other workshops that do not take place on the same date.
Workshop acceptance notification: July 4, 2025.
Submission open: July 22, 2025, AoE.
Submission deadline: September 2, 2025, AoE (extended from August 22 and September 1, 2025).
Reviews due: September 16, 2025, AoE (extended from September 12, 2025).
Decision notification: on or before September 22, 2025.
For any questions, please contact ccfm-neurips2025@googlegroups.com.
Camera-ready papers are allowed a maximum of 6 pages, excluding references and appendices.
All papers must follow the NeurIPS 2025 style files.
We will be giving best paper awards.
Thank you for serving on the program committee as a reviewer. Your commitment and time investment are crucial to the success of the CCFM workshop, and we are deeply grateful for your effort!
Below we summarize some key dates and instructions for the reviewing process:
You should expect to be assigned 2-3 papers for review, each up to 4 pages long.
The reviewing period starts on August 22 and ends on September 12, 2025. Please submit all reviews for your assigned papers by the end of the reviewing period.
When writing your review, please keep in mind that after decisions have been made, reviews and meta-reviews of accepted papers will be made public.
When writing your review, please provide your thoughts on the following aspects:
Summary: Briefly summarize the paper and its contributions in 2-3 sentences. This is not the place to critique the paper; the authors should generally agree with a well-written summary.
Strengths and Weaknesses: Please provide an assessment of the strengths and weaknesses of the paper, touching on each of the following dimensions:
Alignment to theme of workshop and interest to workshop audience (see CfP here: https://sites.google.com/view/ccfm-neurips2025/call-for-papers)
Originality and significance
Clarity of writing
Reproducibility
Rating: Based on your assessment of strengths and weaknesses, please provide an assessment as to whether this paper should be accepted to the workshop.
We thank you again for being a part of the program committee. Should you have any questions or concerns, please reach out to us via ccfm-neurips2025@googlegroups.com.