INTERPOLATE @ NeurIPS 2022
First Workshop on Interpolation and Beyond
New Orleans Convention Center, Friday 2nd of December 2022
About the mixup lunch 🍽
We propose to split attendees into three groups (one with each organizer) for lunch breakouts around the question "What are the most exciting research directions in interpolation-based methods?". Just before the closing remarks, each group will present its findings to the workshop, in an effort to spark new research collaborations.
Lunch group 1 with David (last names starting with A-I), lunch group 2 with Yann (last names starting with J-R), lunch group 3 with Boyi (last names starting with S-Z).
Welcome!
Interpolation methods are an increasingly popular approach to regularizing deep models. On the one hand, the mixup data augmentation method constructs synthetic examples by linearly interpolating random pairs of training data points. On the other hand, weight interpolation techniques such as "model soups" provide a robust alternative for fine-tuning deep learning models, enabling state-of-the-art out-of-distribution generalization. Over their half-decade lifespan, interpolation regularizers have become ubiquitous and fuel state-of-the-art results in domains ranging from computer vision to medical diagnosis.
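As a concrete illustration, here is a minimal sketch of mixup-style data augmentation, assuming PyTorch; the helper `mixup_batch` and its default `alpha` are illustrative choices, not a reference implementation:

```python
import torch

def mixup_batch(x, y, alpha=0.2):
    """Mix a batch: each synthetic example is a convex combination of two training points."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()  # mixing coefficient in [0, 1]
    perm = torch.randperm(x.size(0))                              # random pairing of examples
    x_mix = lam * x + (1.0 - lam) * x[perm]                       # linearly interpolate the inputs
    # Train with an interpolated loss, e.g. lam * CE(f(x_mix), y) + (1 - lam) * CE(f(x_mix), y[perm])
    return x_mix, y, y[perm], lam
```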
Interpolation methods are becoming a standard tool in machine learning, but our understanding of why and how they work is still in its infancy. Even in simpler problem settings such as supervised learning, it remains a puzzling fact that one can build a better image classifier by training only on random combinations of pairs of examples. What aspects of deep neural networks are these regularizers enforcing? What is the theoretical basis of “learning from multiple examples”? How does weight interpolation achieve ensemble-like, state-of-the-art predictors? What are the similarities and differences between methods that interpolate data and those that interpolate parameters?
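For weight interpolation, a uniform "model soup" can be sketched as a simple average of fine-tuned checkpoints of the same architecture. The snippet below assumes PyTorch state_dicts; `uniform_soup` is an illustrative helper, not the official recipe:

```python
import torch

def uniform_soup(state_dicts):
    """Average compatible state_dicts from several fine-tuned models into one set of weights."""
    soup = {}
    for key in state_dicts[0]:
        avg = sum(sd[key].float() for sd in state_dicts) / len(state_dicts)  # uniform average
        soup[key] = avg.to(state_dicts[0][key].dtype)                        # keep original dtype
    return soup

# Hypothetical usage: model.load_state_dict(uniform_soup([torch.load(p) for p in checkpoint_paths]))
```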
This workshop brings together researchers and practitioners of interpolation regularizers to foster the research and discussion needed to advance and better understand these methods. This inaugural meeting will have no shortage of interactions and energy to achieve these exciting goals. We are reserving a few complimentary workshop registrations for authors of accepted papers who would otherwise have difficulty attending; please reach out if this applies to you.
Suggested topics include, but are not limited to, the intersection between interpolation regularizers and:
Domain generalization
Learning by interpolating weights
Semi-supervised learning
Privacy-preserving ML
Theory
Robustness
Fairness
Vision
NLP
Medical applications
Schedule
08:15 - 08:30 - Check-in, setup of posters
08:30 - 08:45 - Opening remarks
08:45 - 09:30 - Youssef Mroueh on Interpolating for fairness
09:30 - 10:15 - Sanjeev Arora on Using Interpolation to provide privacy in Federated Learning settings
10:15 - 11:00 - Chelsea Finn on Repurposing Mixup for Robustness and Regression
11:00 - 12:00 - Panel discussion with Chelsea, Sanjeev, Youssef, and external panelists (Hongyi Zhang, Kilian Weinberger, Dustin Tran)
12:00 - 12:30 - Contributed spotlights (6 x 5 minutes)
12:30 - 14:00 - Lunch breakouts with random mixing group and organizers
14:00 - 14:45 - Kenji Kawaguchi on The developments of the theory of Mixup
14:45 - 15:30 - Alex Lamb on Latent Data Augmentation for Improved Generalization
15:30 - 16:15 - Gabriel Ilharco on Robust and accurate fine-tuning by interpolating weights
16:15 - 17:00 - Panel discussion with Kenji, Alex, Gabriel, and external panelists (Mikhail Belkin)
17:00 - 17:45 - Poster Session
17:45 - 18:00 - Closing Remarks
Accepted papers
SMILE: Sample-to-feature MIxup for Efficient Transfer LEarning (spotlight)
Xingjian Li, Haoyi Xiong, Cheng-zhong Xu, Dejing Dou

GroupMixNorm Layer for Learning Fair Models
Anubha Pandey, Aditi Rai, Maneet Singh, Deepak Bhatt, Tanmoy Bhowmik

Benefits of Overparameterized Convolutional Residual Networks: Function Approximation under Smoothness Constraint
Hao Liu, Minshuo Chen, Siawpeng Er, Wenjing Liao, Tong Zhang, Tuo Zhao

FedLN: Federated Learning with Label Noise
Vasileios Tsouvalas, Aaqib Saeed, Tanir Özçelebi, Nirvana Meratnia

Effect of mixup Training on Representation Learning
Arslan Chaudhry, Aditya Krishna Menon, Andreas Veit, Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar

Overparameterization Implicitly Regularizes Input-Space Smoothness
Matteo Gamba, Hossein Azizpour, Mårten Björkman

Mixed Samples Data Augmentation with Replacing Latent Vector Components in Normalizing Flow
Genki Osada, Budrul Ahsan, Takashi Nishide

Momentum-based Weight Interpolation of Strong Zero-Shot Models for Continual Learning (spotlight)
Zafir Stojanovski, Karsten Roth, Zeynep Akata

Mixup for Robust Image Classification - Application in Continuously Transitioning Industrial Sprays
Huanyi Shui, Hongjiang Li, Devesh Upadhyay, Praveen Narayanan, Alemayehu Admasu

Interpolating Compressed Parameter Subspaces (spotlight)
Siddhartha Datta, Nigel Shadbolt

Covariate Shift Detection via Domain Interpolation Sensitivity (spotlight)
Tejas Gokhale, Joshua Feinglass, Yezhou Yang

Over-Training with Mixup May Hurt Generalization
Zixuan Liu, Ziqiao Wang, Hongyu Guo, Yongyi Mao

LSGANs with Gradient Regularizers are Smooth High-dimensional Interpolators
Siddarth Asokan, Chandra Sekhar Seelamantula

AlignMixup: Improving Representations By Interpolating Aligned Features
Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis

Sample Relationships through the Lens of Learning Dynamics with Label Information (spotlight)
Shangmin Guo, Yi Ren, Stefano V Albrecht, Kenny Smith

Pre-train, fine-tune, interpolate: a three-stage strategy for domain generalization
Alexandre Rame, Jianyu Zhang, Leon Bottou, David Lopez-Paz

Improving Domain Generalization with Interpolation Robustness (spotlight)
Ragja Palakkadavath, Thanh Nguyen-Tang, Sunil Gupta, Svetha Venkatesh

Differentially Private CutMix for Split Learning with Vision Transformer
Seungeun Oh, Jihong Park, Sihun Baek, Hyelin Nam, Praneeth Vepakomma, Ramesh Raskar, Mehdi Bennis, Seong-Lyun Kim

On Data Augmentation and Consistency-based Semi-supervised Relation Extraction
Komal Kumar Teru

Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models
Margaret Li
Call for papers
Authors are invited to submit short papers of up to 4 pages, with an unlimited number of pages for references and supplementary materials. Submissions must be anonymized, as the reviewing process is double-blind. Please use the NeurIPS template for submissions.
Paper submission deadline: October 5, 2022, 11:59pm Anywhere on Earth (extended from September 22, 2022)
Paper acceptance notification: October 19, 2022, 11:59pm Anywhere on Earth (originally October 14, 2022)
This is a non-archival workshop. To foster as much discussion as possible, we welcome both new submissions and work that has already been published during the COVID-19 pandemic. For such papers, the venue of publication should be clearly indicated at submission time.
Submission Link: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/INTERPOLATE
Invited speakers
Youssef Mroueh, Sanjeev Arora, Chelsea Finn, Kenji Kawaguchi, Alex Lamb, and Gabriel Ilharco (see the schedule above for talk titles).
Organizers
FAQ
Q. Can I submit work that is under review in a different venue?
Yes, this is a non-archival workshop.
Program committee
Abhishek Sinha (Stanford University)
Badr Youbi Idrissi (Facebook)
Chengzhi Mao (Columbia University)
Christopher Beckham (Polytechnique Montreal)
Daniel Y Fu (Stanford University)
Diane Bouchacourt (Facebook AI Research)
Fabrice Y Harel-Canada (University of California Los Angeles)
Felix Wu (ASAPP Inc.)
Harshavardhan Sundar (Amazon)
Herve Jegou (Facebook)
Hongyang Ryan Zhang (Computer Science, Northeastern University)
Hongyu Guo (University of Ottawa)
Hugo Touvron (Facebook)
Jang-Hyun Kim (Seoul National University)
Jihong Park (Deakin University)
Jonas Ngnawe (Université Laval)
Karsten Roth (University of Tuebingen)
Kimin Lee (Google)
Marianne Abemgnigni Njifon (The University of Goettingen)
Mert Bülent Sarıyıldız (Naver Labs Europe)
Mohammad Kachuee (Amazon)
Nishant Rai (Stanford University)
Shayan Fazeli (University of California, Los Angeles)
Shu Kong (Carnegie Mellon University)
Srijan Das (University of North Carolina at Charlotte)
Tao Sun (State University of New York, Stony Brook)
Tong He (Amazon)
Wang Chen (Google)
Xiang Li (Nanjing University of Science and Technology)
Xiaoqing Tan (University of Pittsburgh)
Xindi Wu (Carnegie Mellon University)
Yaodong Yu (Electrical Engineering & Computer Science Department, University of California Berkeley)
Yu-Xiong Wang (School of Computer Science, University of Illinois at Urbana-Champaign)