Understanding and Improving Generalization in Deep Learning
Schedule
8:30 - 8:40 Opening Remarks
8:40 - 9:20 Invited Speaker: Daniel Roy "Progress on Nonvacuous Generalization Bounds"
9:20 - 9:50 Invited Speaker: Chelsea Finn "Training for Generalization"
9:50 - 10:05 Spotlight Talk: "A Meta-Analysis of Overfitting in Machine Learning"
10:05 - 10:20 Spotlight Talk: "Uniform convergence may be unable to explain generalization in deep learning"
10:20 - 10:40 Break and Poster Session
10:40 - 11:10 Invited Speaker: Sham Kakade "Prediction, Learning, and Memory"
11:10 - 11:40 Invited Speaker: Mikhail Belkin "A Hard Look at Generalization and its Theories"
11:40 - 11:55 Spotlight Talk: "Towards Task and Architecture-Independent Generalization Gap Predictors"
11:55 - 12:10 Spotlight Talk: "Data-Dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation"
12:10 - 13:30 Lunch and Poster Session
13:30 - 14:00 Invited Speaker: Aleksander Mądry "Are All Features Created Equal?"
14:00 - 14:30 Invited Speaker: Jason Lee "On the Foundations of Deep Learning: SGD, Overparametrization, and Generalization"
14:30 - 14:45 Spotlight Talk: "Towards Large Scale Structure of the Loss Landscape of Neural Networks"
14:45 - 15:00 Spotlight Talk: "Zero-Shot Learning from scratch: leveraging local compositional representations"
15:00 - 15:30 Break and Poster Session
15:30 - 16:30 Panel Discussion (Moderator: Nati Srebro)
16:30 - 16:45 Spotlight Talk: "Overparameterization without Overfitting: Jacobian-based Generalization Guarantees for Neural Networks"
16:45 - 17:00 Spotlight Talk: "How Learning Rate and Delay Affect Minima Selection in Asynchronous Training of Neural Networks: Toward Closing the Generalization Gap"
17:00 - 18:00 Poster Session