Workshop Overview

Most machine learning problems require solving a nonconvex optimization problem; examples include deep learning, Bayesian inference, and clustering. The objective functions in all these instances are highly nonconvex, and it is an open question whether there are provable, polynomial-time algorithms for these problems under realistic assumptions.

A diverse set of approaches has been devised to solve nonconvex problems. They range from simple local-search methods, such as gradient descent and alternating minimization, to more involved frameworks, such as simulated annealing, continuation methods, convex hierarchies, Bayesian optimization, and branch and bound. Moreover, for special classes of nonconvex problems there are efficient methods, such as quasi-convex optimization, star-convex optimization, submodular optimization, and matrix/tensor decomposition.
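To make the difficulty concrete, here is a minimal sketch of local search on a nonconvex objective (the function, step size, and starting points are illustrative assumptions, not taken from the workshop): plain gradient descent converges to different stationary points depending on initialization, which is precisely why nonconvexity complicates global guarantees.

# Minimal sketch: gradient descent on a 1-D nonconvex objective.
# f(x) = x^4 - 3x^2 + x has two local minima; which one gradient
# descent finds depends entirely on where it starts.

def f(x):
    return x**4 - 3 * x**2 + x

def grad_f(x):
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, steps=1000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

for x0 in (-2.0, 2.0):  # two different starting points
    x_star = gradient_descent(x0)
    print(f"start={x0:+.1f} -> x={x_star:+.4f}, f(x)={f(x_star):+.4f}")

Starting from -2.0, the iterates reach the global minimum near x = -1.30; starting from +2.0, they settle in the shallower local minimum near x = 1.13.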

There has been a burst of recent research activity in all these areas. This workshop brings together researchers from these vastly different domains and hopes to create a dialogue among them. In addition to the theoretical frameworks, the workshop will also feature practitioners, especially in deep learning, who are developing new methodologies for training large-scale neural networks. The result will be a cross-fertilization of ideas from diverse areas and schools of thought.


Awards:
Best Theoretical Work Award: $300 cash from Google went to Tengyu Ma (Princeton) and Rong Ge (Duke) for Understanding the Landscape of Over-complete Tensor Decomposition [Paper].
Best Applied Work Award: Titan X GPU from NVIDIA went to Caglar Gulcehre (University of Montreal), Marcin Moczulski (Oxford), Francesco Visin (Politecnico di Milano), and Yoshua Bengio (University of Montreal) for Mollifying Networks [Paper].


Important Dates:
Submission Deadline:  September 20, 2016
Acceptance Notification:  October 3, 2016
Final Paper Due:  December 1, 2016
Workshop Date:  December 9, 2016


Organizers:
Animashree Anandkumar, UC Irvine
Percy Liang, Stanford
Hossein Mobahi, Google Research


Photo credit: Suvrit Sra


Sponsors:
http://research.google.com