Modern Trends in Nonconvex Optimization for Machine Learning

The workshop is on July 14, 2018, in Room A6, Stockholmsmässan, Stockholm, Sweden.

Overview

Nonconvex optimization has become a core topic in modern machine learning (ML). A wide variety of ML models and subfields leverage nonconvex optimization, including deep learning, reinforcement learning, matrix/tensor factorization models, and probabilistic (Bayesian) models. Classically, nonconvex optimization was widely believed to be intractable due to worst-case complexity results. However, recently the community has seen rapid progress in both the empirical training of nonconvex models and the development of their theoretical understanding.


Advances on the theoretical side range from understanding the loss landscapes of various nonconvex models to efficient algorithms in the offline, stochastic, parallel, and distributed settings that utilize zeroth-, first-, or second-order information. Recent guarantees not only ensure convergence to stationary points (points where the gradient vanishes), but also address the difficulties posed by spurious local minima and saddle points, both locally and globally. In parallel, the field has witnessed significant progress driven by practitioners. Novel nonconvex models such as residual networks and LSTMs, together with techniques such as batch normalization and ADAM for accelerating their training, have become the empirical state of the art.
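To make such guarantees concrete, here is a brief sketch (in LaTeX) of the standard notions of approximate stationarity for a smooth objective; the symbols f, x, and epsilon are illustrative and not tied to any specific result mentioned above.

% epsilon-first-order stationary point: the gradient is nearly zero.
\[
  \|\nabla f(x)\| \le \epsilon
\]
% epsilon-second-order stationary point: in addition, the Hessian has no
% strongly negative curvature direction, which rules out strict saddle points.
\[
  \|\nabla f(x)\| \le \epsilon
  \quad\text{and}\quad
  \lambda_{\min}\big(\nabla^{2} f(x)\big) \ge -\sqrt{\epsilon}
\]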


This workshop will bring together experts in machine learning, artificial intelligence, and optimization to tackle some of the conceptual and practical bottlenecks that are hindering progress and to explore new directions. Examples include (but are not limited to) implicit regularization, landscape design, homotopy methods, adaptive algorithms, and robust optimization. The workshop aims to facilitate cross-domain discussion and debate on these and related topics, and to help reshape this rapidly progressing field.



Facebook Page: https://www.facebook.com/nonconvex/