ICML 2021 Workshop
Overparameterization: Pitfalls & Opportunities
July 24, 2021
Note: The workshop will be virtual.
Overview
Modern machine learning models are often highly overparameterized. The prime recent examples are neural network architectures that achieve state-of-the-art performance while having many more parameters than training examples. Despite these developments, the consequences of overparameterization are not fully understood: worst-case theories of learnability lack explanatory or predictive power in this regime. Overparameterized models have been found to exhibit "benign overfitting" as well as double (and multiple) descent behavior, effects that lie outside the range of classical statistical phenomena, and other new phenomena likely remain to be discovered. Some of these effects depend on the properties of the data, but we have only simplistic approaches for incorporating this aspect of the problem. In light of recent progress and rapidly shifting understanding in the community, we believe the time is ripe for a workshop focused on understanding overparameterization from multiple angles.
We wish for this workshop to serve as a platform to foster discussion and scientific consensus about the implications of overparameterization. Given its relevance to machine learning practice over the past several years, we believe a workshop specifically focused on overparameterization is timely for the community. We hope it will lead to a more unified view, the identification of new questions, and new collaborations.
We invite contributions on a range of topics, including, but not limited to:
Studies of benign overfitting
Multiple descent risk curves
Overparameterized models beyond neural networks
Effects of model compression
Memorization in overparameterized models
Conditions under which overparameterization hurts generalization
Optimization methods tailored for overparameterized models
Robustness of overparameterized models
Implicit bias/regularization of training methods for overparameterized models
Interplay between data and overparameterization
Empirical studies of the impact of overparameterization
Invited Speakers
Peter Bartlett
UC Berkeley
Misha Belkin
UCSD
Suriya Gunasekar
Microsoft Research
Tengyuan Liang
University of Chicago
Andrea Montanari
Stanford
Lenka Zdeborová
EPFL
Organizers
Yasaman Bahri
Google Research
Quanquan Gu
UCLA
Amin Karbasi
Yale University
Hanie Sedghi
Google Research
We thank Daniel Roy (University of Toronto) for his contributions to the workshop proposal.
Program Committee
Ben Adlam (Google Brain), Chiyuan Zhang (Google), Difan Zou (UCLA), Kartik Sreenivasan (University of Wisconsin-Madison), Lechao Xiao (Google Brain), Lin Chen (Simons Institute for the Theory of Computing), Rahim Entezari (TU Graz), Spencer Frei (UCLA), Vaishnavh Nagarajan (CMU), Yuan Cao (UCLA), Zixiang Chen (UCLA).