Call for Papers
and Author Instructions
The R0-FoMo Workshop on Robustness of Few-shot and Zero-shot Learning in Foundation Models @ NeurIPS 2023 solicits novel contributions relating broadly to few-shot and zero-shot learning in large foundation models. We accept long and short papers, both empirical and theoretical in nature, on recent progress in the robustness of few-shot or zero-shot learning and its applications. The workshop will be held on December 15th, 2023. Relevant topics include (but are not limited to):
In-context learning
Prompt learning
Instruction tuning
Automated evaluation of foundation models
Parameter Efficient Fine-tuning
Multilingual foundation models
Multimodal foundation models
Representation learning and self-supervised learning for foundation models
Responsible AI (Safety, Privacy, Integrity, Fairness, Robustness) using foundation models
Policy optimization (supervised / reinforced) for foundation models
Alignment to human preferences
Human-in-the-loop learning
Synthetic data generation for/from foundation models
Unsupervised learning from foundation models
Adversarial few-shot or zero-shot robustness
Open problems in few-shot and zero-shot learning of large foundation models
Important Dates
Paper Submissions Due: Oct 07, 2023 (AOE) (extended from Sep 29, 2023)
Notification of Acceptance: Oct 27, 2023
Camera-ready Paper Due: Nov 17, 2023
Workshop Date: Dec 15, 2023
Formatting / Submission Instructions
Submission URL: https://openreview.net/group?id=NeurIPS.cc/2023/Workshop/R0-FoMo
Submission Instructions
Papers can be up to 6 pages (7 for camera-ready) in the NeurIPS submission format (double-blind) [format], excluding references and supplementary material. We allow an unlimited number of pages for references and supplementary material, but reviewers are not required to review the supplementary material. Accepted papers will be presented at the workshop as contributed talks and/or posters. At the authors' discretion, accepted papers can be published through the workshop website.
Concurrent Submissions
We welcome research papers currently under review at archival NLP and ML conferences (e.g., EMNLP and ICLR). Submission to this workshop will not violate the anonymity or dual-submission policies of those conferences, as the workshop is non-archival. Please note that we also allow the submission of recently published work; however, when selecting papers for oral presentation, preference is given to original work.
Double-blind reviews
Submissions will be peer-reviewed by at least two reviewers, in addition to an area chair. The reviewing process will be double-blind at the level of the reviewers. As an author, you are responsible for anonymizing your submission: do not include author names, affiliations, acknowledgements, or any other information that could lead to de-anonymization.