Modern Trends in Nonconvex Optimization for Machine Learning
Schedule
8:50 - 9:00 Opening Remarks
9:00 - 9:30 Invited Speaker: Sham Kakade "Provably Correct Automatic Subdifferentiation"
9:30 - 10:00 Invited Speaker: Elad Hazan "Adaptive Regularization Strikes Back"
10:00 - 10:30 Poster Session 1 (Coffee Break)
10:30 - 11:00 Invited Speaker: Yoshua Bengio "On stochastic gradient descent, flatness and generalization"
11:00 - 11:30 Invited Speaker: Praneeth Netrapalli "How to escape saddle points efficiently?"
11:30 - 12:00 Spotlight Session 1:
- Yuxin Chen, Yuejie Chi, Jianqing Fan, and Cong Ma. "Gradient Descent with Random Initialization: Fast Global Convergence for Nonconvex Phase Retrieval"
- Simon S. Du, Wei Hu, and Jason D. Lee. "Algorithmic Regularization in Learning Multi-Layer Homogeneous Models: The Auto-Balancing Effect"
- Jordan Frecon, Saverio Salzo, and Massimiliano Pontil. "Inferring the Group Lasso Structure via Bilevel Optimization"
- Benjamin Dubois, Jean-François Delmas, and Guillaume Obozinski. "Fast Algorithms for Sparse Reduced-Rank Regression"
- Zhanhong Jiang, Aditya Balu, Chinmay Hegde, and Soumik Sarkar. "Incremental Consensus based Collaborative Deep Learning"
- Matthew Staib, Bryan Wilder, and Stefanie Jegelka. "Distributionally Robust Submodular Maximization"
12:00 - 1:30 Lunch Break
1:30 - 2:00 Invited Speaker: Dmitriy Drusvyatskiy "Convergence rates of stochastic algorithms for nonsmooth nonconvex problems"
2:00 - 2:30 Invited Speaker: Sergey Levine "Meta-Learning of Gradient-Based Learners"
2:30 - 3:00 Spotlight Session 2:
- Jonas Kohler, Hadi Daneshmand, Aurelien Lucchi, Ming Zhou, Klaus Neymeyr, and Thomas Hofmann. "Provably Fast Convergence of Batch Normalization on Learning Halfspaces under Gaussian Inputs"
- Pavel Dvurechensky. "Gradient Method With Inexact Oracle for Composite Non-Convex Optimization"
- Yue Sun and Maryam Fazel. "Escaping saddle points efficiently in equality-constrained optimization problems"
- Dylan J. Foster, Ayush Sekhari, and Karthik Sridharan. "Uniform Convergence of Gradients for Non-Convex Learning and Optimization"
- Dar Gilboa, Sam Buchanan, and John Wright. "Efficient Dictionary Learning with Gradient Descent"
3:00 - 3:30 Poster Session 2 (Coffee Break)
3:30 - 4:00 Invited Speaker: Coralia Cartis "Stochastic variants of nonconvex optimization methods, with complexity guarantees"
4:00 - 5:00 Panel Discussion
5:00 - 6:00 Poster Session 3
*The best paper award will be announced at the beginning of the panel discussion.