Understanding and Improving Generalization in Deep Learning

14 June 2019, Long Beach, California, USA

Overview

Generalization is a cornerstone of machine learning and one of the keys to its practical success. Deep networks generalize well on supervised learning tasks even when heavily over-parameterized, and this property is one reason for their enormous impact. Despite recent research efforts in this direction, the problem of understanding generalization remains far from solved.

In the most basic context of deep supervised learning, generalization refers to the gap between a model's error on the training set and its error on a test set drawn from the same distribution. Current research challenges include understanding the data-dependence of this gap, the role of increasing network depth, and the roles of implicit and explicit regularization.
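As a sketch of this definition (using standard notation, not notation taken from the workshop text), the generalization gap of a learned hypothesis can be written as:

```latex
% Generalization gap of a hypothesis h trained on a sample S = {(x_i, y_i)}_{i=1}^n
% drawn i.i.d. from a distribution D, for a loss function \ell.
\[
  \mathrm{gap}(h)
  \;=\;
  \underbrace{\mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell(h(x),\,y)\big]}_{\text{population (test) error}}
  \;-\;
  \underbrace{\frac{1}{n}\sum_{i=1}^{n} \ell\big(h(x_i),\,y_i\big)}_{\text{empirical (training) error}}
\]
```

The puzzle referred to above is that over-parameterized deep networks can drive the empirical error to zero yet still keep this gap small in practice.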

The problem becomes harder when the test and training distributions differ. A mathematically well-defined setup is that of adversarial examples, which has seen a flurry of recent research. When the test distribution shifts even further (perturbations of large norm in input space), as in domain adaptation problems, even the mathematical definition of generalization still eludes the community. An interesting research question is which inductive biases in current models make them sensitive to such perturbations, and how to design better ones.
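A hedged sketch of the adversarial-examples setup mentioned above, in the standard worst-case formulation over a norm ball (this specific notation is an assumption, not from the workshop text):

```latex
% Robust (adversarial) error: worst-case loss over an \ell_p-ball of radius \varepsilon
% around each test input. Adversarial examples are points x + \delta achieving the max.
\[
  \mathrm{err}_{\mathrm{rob}}(h)
  \;=\;
  \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[\, \max_{\|\delta\|_p \,\le\, \varepsilon} \ell\big(h(x+\delta),\,y\big) \Big]
\]
```

For small \(\varepsilon\) this is a well-defined generalization target; the open problems described above concern shifts too large to be captured by any such norm ball.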

Going beyond supervised learning, the formulation and measurement of generalization in the context of deep unsupervised and self-supervised learning, transfer learning, and reinforcement learning is gaining momentum. However, well-accepted definitions and empirical practices remain wide-open research questions.

In this workshop, we bring together prominent researchers in generalization theory and practice to discuss the current state of the art and promising future research directions in all areas of deep learning.

The workshop will cover the following research areas:

● Implicit and explicit regularization, and the role of optimization algorithms in generalization

● Architecture choices that improve generalization

● Empirical approaches to understanding generalization

● Generalization bounds and empirical criteria to evaluate generalization bounds

● Robustness: generalizing under distributional shift (a.k.a. dataset shift)

● Generalization in the context of representation/unsupervised learning, transfer learning and reinforcement learning: definitions and empirical approaches

Call for Papers is here.

Our (awesome) invited speakers.

Organizing Committee

Hossein Mobahi (Google AI)

Dilip Krishnan (Google AI)

Peter Bartlett (UC Berkeley)

Nati Srebro (TTIC & Google AI)

Dawn Song (UC Berkeley)