We will accept submissions on all topics related to continual learning. Particular emphases of this workshop include:
Discussing quirks and pitfalls of existing continual learning algorithms, and presenting realistic scenarios in which they fail and why.
Large-scale continual learning for real-world problems.
Proposing formulations of continual learning that are closer to real-world settings.
We are also particularly interested in submissions addressing the following questions:
What large-scale datasets should we collect and standardize as CL benchmarks?
What are the correct metrics to be monitored in CL systems?
How much task information, such as task descriptors, should CL systems have access to? If any, should this task information be available during training, testing, or both? Should we differentiate methods that have access to task IDs during training and testing, training only, testing only, or never?
What are the expected benefits of CL systems, and how should these drive research in CL?
Style & Author Instructions
Abstract length: We ask authors to use the official ICML 2020 template and limit submissions to 4 pages excluding references. Authors are welcome to include an appendix; however, reviewers are not required to consult this additional material when assessing the submission.
Dual Submissions: Previously published or currently under-review work is acceptable. We also allow submissions that have been concurrently submitted to other ICML 2020 workshops.
Double-blind Review: Submissions must not include any identifying information about the authors (names, affiliations, etc.) or links and self-references that may reveal the authors' identities.
The organizers aim to provide feedback from three reviewers per submission, who will assess it based on relevance, novelty, and potential for impact. Reviewers are asked to rate the submission (Reject/Borderline/Accept) and to provide written feedback. There will be no rebuttal period.