INTERPOLATE @ NeurIPS 2022

First Workshop on Interpolation and Beyond

New Orleans Convention Center, Friday 2nd of December 2022

About the mixup lunch 🍽

We propose to split attendees into three groups (one with each organizer) for lunch breakouts around the question "What are the most exciting research directions in interpolation-based methods?". Just before the closing remarks, each group will present its findings to the workshop, in an effort to create new research collaborations.

Lunch group 1 with David (last names starting with A-I), lunch group 2 with Yann (last names starting with J-R), lunch group 3 with Boyi (last names starting with S-Z).

Welcome!

Interpolation methods are an increasingly popular approach to regularizing deep models. On the one hand, the mixup data augmentation method constructs synthetic examples by linearly interpolating random pairs of training data points. On the other hand, weight interpolation techniques such as "model soups" provide a robust alternative for fine-tuning deep learning models, enabling state-of-the-art out-of-distribution generalization. During their half-decade lifespan, interpolation regularizers have become ubiquitous and now fuel state-of-the-art results in virtually all domains, including computer vision and medical diagnosis.
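To make the data-side construction concrete, here is a minimal NumPy sketch of mixup in its standard formulation: convex combinations of a batch with a shuffled copy of itself, using a Beta-distributed mixing coefficient. The function name and defaults are illustrative, not taken from any particular library.

import numpy as np

def mixup_batch(x, y, alpha=0.2, seed=None):
    # x: (batch, ...) array of inputs; y: (batch, num_classes) one-hot labels.
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)       # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))     # random pairing of examples
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return x_mixed, y_mixed

Training then proceeds on (x_mixed, y_mixed) in place of the original batch; small values of alpha concentrate lam near 0 or 1, so mixed examples stay close to real ones.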

Interpolation methods are becoming standard tools in machine learning, but our understanding of why and how they work is still in its infancy. Even in simpler problem settings such as supervised learning, it remains a puzzling fact that one can build a better image classifier by training only on random combinations of pairs of examples. Which properties of deep neural networks are these regularizers enforcing? What is the theoretical basis of "learning from multiple examples"? How does weight interpolation achieve ensemble-like, state-of-the-art predictors? What are the similarities and differences between methods that interpolate data and the ones that interpolate parameters?
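As a point of comparison on the parameter side, the following sketch interpolates and averages the weights of fine-tuned checkpoints, in the spirit of uniform "model soups". It assumes checkpoints stored as plain name-to-array dictionaries with identical architectures; all names here are illustrative.

import numpy as np

def interpolate_weights(theta_a, theta_b, t=0.5):
    # Per-parameter convex combination (1 - t) * theta_a + t * theta_b.
    return {k: (1.0 - t) * theta_a[k] + t * theta_b[k] for k in theta_a}

def uniform_soup(checkpoints):
    # Uniform average of several checkpoints, parameter by parameter.
    return {k: np.mean([c[k] for c in checkpoints], axis=0)
            for k in checkpoints[0]}

Unlike a prediction ensemble, the averaged model costs a single forward pass at inference time, which is part of what makes these questions about its ensemble-like accuracy interesting.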

This workshop brings together researchers and practitioners of interpolation regularizers to foster discussion and advance our understanding of these methods. This inaugural meeting will have no shortage of interactions and energy to achieve these exciting goals. We are reserving a few complimentary workshop registrations for accepted-paper authors who would otherwise have difficulty attending. Please reach out if this applies to you.

Suggested topics include, but are not limited to, the intersection between interpolation regularizers and other areas of machine learning.

Schedule

08:15 - 08:30 - Check-in and poster setup

08:30 - 08:45 - Opening remarks

08:45 - 09:30 - Youssef Mroueh on Interpolating for fairness

09:30 - 10:15 - Sanjeev Arora on Using Interpolation to provide privacy in Federated Learning settings

10:15 - 11:00 - Chelsea Finn on Repurposing Mixup for Robustness and Regression

11:00 - 12:00 - Panel discussion with Chelsea, Sanjeev, Youssef, and external panelists (Hongyi Zhang, Kilian Weinberger, Dustin Tran)

12:00 - 12:30 - Contributed spotlights (6 x 5 minutes)

12:30 - 14:00 - Lunch breakouts with randomly mixed groups and organizers

14:00 - 14:45 - Kenji Kawaguchi on The developments of the theory of Mixup

14:45 - 15:30 - Alex Lamb on Latent Data Augmentation for Improved Generalization

15:30 - 16:15 - Gabriel Ilharco on Robust and accurate fine-tuning by interpolating weights

16:15 - 17:00 - Panel discussion with Kenji, Alex, Gabriel, and external panelist Mikhail Belkin

17:00 - 17:45 - Poster session

17:45 - 18:00 - Closing remarks

Accepted papers

Call for papers

Authors are invited to submit short papers of up to 4 pages, with an unlimited number of pages for references and supplementary materials. Submissions must be anonymized, as the reviewing process will be double-blind. Please use the NeurIPS template for submissions.


This is a non-archival workshop. To foster as much discussion as possible, we welcome both new submissions and works already published during the COVID-19 pandemic. For such papers, the venue of publication should be clearly indicated at submission time.


Submission Link: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/INTERPOLATE

Invited speakers

Chelsea Finn (Stanford University)

Repurposing Mixup for Robustness and Regression

Sanjeev Arora (Princeton University)

Interpolation to provide privacy in Federated Learning

Kenji Kawaguchi (NUS)

On the developments of the theory of Mixup

Alex Lamb (Microsoft Research)

Latent Data Augmentation for Improved Generalization

Youssef Mroueh (IBM Research)

Interpolating for fairness

Gabriel Ilharco (University of Washington)

Robust and accurate fine-tuning by interpolating weights

Organizers

Google Research

Mila & Aalto University

NVIDIA Research

FAQ

Q. Can I submit work that is under review in a different venue?

Yes, this is a non-archival workshop.

Program committee

Abhishek Sinha (Stanford University)

Badr Youbi Idrissi (Facebook)

Chengzhi Mao (Columbia University)

Christopher Beckham (Polytechnique Montreal)

Daniel Y Fu (Stanford University)

Diane Bouchacourt (Facebook AI Research)

Fabrice Y Harel-Canada (University of California Los Angeles)

Felix Wu (ASAPP Inc.)

Harshavardhan Sundar (Amazon)

Herve Jegou (Facebook)

Hongyang Ryan Zhang (Northeastern University)

Hongyu Guo (University of Ottawa)

Hugo Touvron (Facebook)

Jang-Hyun Kim (Seoul National University)

Jihong Park (Deakin University)

Jonas Ngnawe (Université Laval)

Karsten Roth (University of Tuebingen)

Kimin Lee (Google)

Marianne Abemgnigni Njifon (The University of Goettingen)

Mert Bülent Sarıyıldız (Naver Labs Europe)

Mohammad Kachuee (Amazon)

Nishant Rai (Stanford University)

Shayan Fazeli (University of California, Los Angeles)

Shu Kong (Carnegie Mellon University)

Srijan Das (University of North Carolina at Charlotte)

Tao Sun (State University of New York, Stony Brook)

Tong He (Amazon)

Wang Chen (Google)

Xiang Li (Nanjing University of Science and Technology)

Xiaoqing Tan (University of Pittsburgh)

Xindi Wu (Carnegie Mellon University)

Yaodong Yu (University of California, Berkeley)

Yu-Xiong Wang (University of Illinois at Urbana-Champaign)