Workshop Overview

Non-convex optimization is ubiquitous in machine learning. In general, reaching the global optimum of these problems is NP-hard, and in practice, local search methods such as gradient descent can get stuck in spurious local optima and suffer from poor convergence. Over the last few years, tremendous progress has been made in establishing theoretical guarantees for many non-convex optimization problems. While there are worst-case instances which are computationally hard to solve, the focus has shifted to characterizing transparent conditions under which these problems become tractable. In many instances, these conditions turn out to be mild and natural for machine learning applications.

There are certainly many challenging open problems in the area of non-convex optimization. While guarantees have been established for individual instances, there is no common unifying theme of what makes a non-convex problem tractable. Many challenging instances, such as optimization for training multi-layer neural networks or analyzing novel regularization techniques such as dropout, still remain wide open. On the practical side, conversations between theorists and practitioners can help identify what kinds of conditions are reasonable for specific applications, and thus lead to the design of practically motivated algorithms for non-convex optimization with rigorous guarantees.

This workshop will fill an important gap by bringing together researchers from disparate communities and bridging the divide between theoreticians and practitioners. To facilitate this discussion, we aim to make the workshop easily accessible to people currently unfamiliar with the intricate details of these methods. The workshop will take place on Saturday, December 12th, from 8:30 AM to 6:30 PM.

Organizers:
Animashree Anandkumar, UC Irvine
Kamalika Chaudhuri, UC San Diego
Percy Liang, Stanford
U N Niranjan, UC Irvine