Friday, Dec 13th / Saturday, Dec 14th - 2019


Higher-order methods, such as Newton, quasi-Newton and adaptive gradient descent methods, are extensively used in many scientific and engineering domains. At least in theory, these methods possess several nice features: they exploit local curvature information to mitigate the effects of ill-conditioning, they avoid or diminish the need for hyper-parameter tuning, and they have enough concurrency to take advantage of distributed computing environments. Researchers have even developed stochastic versions of higher-order methods that achieve speed and scalability by incorporating curvature information in an economical and judicious manner. Nevertheless, higher-order methods are often "undervalued."
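As a minimal sketch of the curvature argument above (not taken from any workshop material; the toy quadratic, the damping parameter and the function names are illustrative assumptions), a damped Newton step rescales the gradient by the inverse Hessian, so convergence does not degrade on an ill-conditioned problem the way plain gradient descent does:

```python
# Illustrative sketch: a damped Newton iteration on a toy ill-conditioned
# quadratic f(x) = 0.5 * x^T A x. The Hessian rescales the gradient, which
# is the "local curvature information" referred to in the text.
import numpy as np

def damped_newton(grad, hess, x0, steps=20, damping=1e-4):
    """Iterate x <- x - (H + damping * I)^{-1} g."""
    x = x0.astype(float)
    for _ in range(steps):
        g = grad(x)
        H = hess(x) + damping * np.eye(x.size)
        x = x - np.linalg.solve(H, g)
    return x

# Condition number of 1e4: gradient descent would need a tiny step size,
# while the Newton iteration converges in a handful of steps.
A = np.diag([1.0, 1e4])
grad = lambda x: A @ x
hess = lambda x: A

print(damped_newton(grad, hess, np.array([1.0, 1.0])))  # ~[0, 0]
```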

This workshop will attempt to shed light on this statement. Topics of interest include --but are not limited to-- second-order methods, adaptive gradient descent methods, regularization techniques, as well as techniques based on higher-order derivatives. This workshop aims to bring machine learning and optimization researchers closer together, in order to facilitate a discussion regarding underlying questions such as the following:


- Why are higher-order methods not omnipresent in practice?

- Why are higher-order methods important in machine learning, and what advantages can they offer?

- What are their limitations and disadvantages?

- How should (or could) they be implemented in practice? 


Speakers

- Coralia Cartis
- Don Goldfarb
- Elad Hazan
- James Martens
- Katya Scheinberg
- Stephen Wright

Organizers

- Albert S. Berahas (albertberahas@lehigh.edu)
- Anastasios Kyrillidis (anastasios@rice.edu)
- Michael W. Mahoney (mmahoney@stat.berkeley.edu)
- Fred Roosta (fred.roosta@uq.edu.au)