The Fourth Bay Area Optimization Meeting will be held Saturday, May 19, 2018, at Stanford University. The meeting will bring together leaders in variational analysis, optimization, and their applications. The event is supported by the Department of Management Science and Engineering at Stanford, Infanger Investment Technology, and the Diener-Veinott Family.

Location

Bishop Auditorium, Lathrop Library, 518 Memorial Way, Stanford University.
https://campus-map.stanford.edu/index.cfm?ID=08-350

Parking is available around the Oval, on Lasuen Street (behind the Littlefield Center), on Memorial Drive (just behind the Lathrop Library), along Roth Way, and along Galvez Street.

Program

9:10 Welcome
9:20 Michael Saunders, Stanford University
10:00 Jong-Shi Pang, University of Southern California
10:40 Break
11:10 Javad Lavaei, University of California, Berkeley
11:50 Caroline Uhler, Massachusetts Institute of Technology
12:30 Lunch
2:10 Shiqian Ma, University of California, Davis
2:50 Walid Krichene, Google
3:30 Break
4:00 Stephen Boyd, Stanford University

For titles and abstracts, see below.

Attendees are on their own for lunch. There are four main options:
1) Go downstairs (in the same building as Bishop Auditorium) to the Lathrop Cafe, which will be serving food from 10 AM to 3 PM: https://www.lathropcafe.com/
2) Go to Tresidder Union, which is a ten-minute walk through campus. Options there include Panda Express, CoHo, The Treehouse, Starbucks, The Axe & Palm, and Subway: https://goo.gl/maps/g5C4bTcdYDB2
3) Go to the Coupa Cafe at the Graduate School of Business, which is a five-minute walk: https://www.coupacafe.com/locations/gsb/
4) Venture into Palo Alto…

Registration

There is no registration fee, but attendees are required to register before May 14 by email to Johannes Royset.

Local Organizers

Peter Glynn, Stanford (Chair)
Richard Cottle, Stanford
Yinyu Ye, Stanford

Program Committee

Johannes O. Royset, Naval Postgraduate School (Chair)
Anil Aswani, University of California, Berkeley
Richard Cottle, Stanford University
Matthias Koeppe, University of California, Davis

Titles and Abstracts

Michael Saunders, Stanford University
Algorithm NCL for Constrained Optimization
Standard optimization solvers have difficulty if the active-constraint gradients are not independent at a solution.
For example, problems of the form min f(x) subject to c(x) >= 0 (m constraints and n variables) may have more than n constraints active at a solution. Such problems arise in the modeling of tax policy (with perhaps millions of constraints and thousands of variables). Algorithm NCL solves a sequence of about 10 augmented Lagrangian subproblems with constraints c(x) + r >= 0. The extra variables r make the subproblem constraints linearly independent, and the subproblem solutions converge to the required solution as r is driven to zero. Assuming second derivatives are available, NCL expands the use of interior methods for large-scale optimization.
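The subproblem structure described above can be sketched on a toy problem. The snippet below is a minimal illustration, not the actual NCL implementation: the toy objective, the penalty schedule, and the multiplier-update convention are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy problem:  min x0^2 + x1^2  s.t.  x0 - 1 >= 0  and  2*(x0 - 1) >= 0.
# The two constraint gradients are linearly dependent at the solution (1, 0),
# which is the degenerate situation the abstract describes.
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: np.array([x[0] - 1.0, 2.0 * (x[0] - 1.0)])
m = 2

y = np.zeros(m)      # multiplier estimates
rho = 10.0           # penalty parameter, increased each outer iteration
z = np.zeros(2 + m)  # z = (x, r)

for _ in range(10):  # about 10 subproblems, as in the abstract
    # Subproblem:  min_{x,r} f(x) - y.r + (rho/2)||r||^2  s.t.  c(x) + r >= 0.
    # The extra variables r make the subproblem constraints linearly independent.
    obj = lambda z: f(z[:2]) - y @ z[2:] + 0.5 * rho * (z[2:] @ z[2:])
    con = NonlinearConstraint(lambda z: c(z[:2]) + z[2:], 0.0, np.inf)
    z = minimize(obj, z, method="SLSQP", constraints=[con]).x
    y = y - rho * z[2:]  # first-order multiplier update (one common convention)
    rho *= 3.0           # drive r toward zero

print(np.round(z[:2], 4))  # ≈ [1, 0]
```

As rho grows, the optimal r shrinks toward zero and x approaches the solution of the original degenerate problem.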
Jong-Shi Pang, University of Southern California
Composite Difference-Max Programs for Modern Statistical Estimation and a Nonmonotone Majorization-Minimization Algorithm
Many modern statistical estimation problems are defined by three major components: a statistical model that postulates the dependence of an output quantity on the features of the input; a loss function measuring the error between the observed output and the input data; and a regularizer to control overfitting and/or variable selection in the model. The model parameters are determined by empirical risk minimization, which involves minimizing the loss function weighted by the model regularizer. We present a sampling version of the overall expected-value optimization problem in which all three component functions may be of the difference-of-convex (dc) type, and illustrate this setting with a host of commonly used functions, including those in piecewise affine regression and in deep learning with piecewise affine activation functions. We describe a nonmonotone majorization-minimization (MM) algorithm for solving the unified nonconvex, nondifferentiable optimization problem, which is formulated as a specially structured composite dc program of the pointwise max type, and present convergence results to a directional stationary solution. An efficient semismooth Newton method is proposed to solve the dual of the MM subproblems. Numerical results are presented to demonstrate the effectiveness of the proposed algorithms and the superiority of a non-traditional piecewise affine regression model over the traditional linear model.
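As background for the majorization-minimization idea in this abstract, here is a minimal one-dimensional DCA/MM iteration on a difference-of-convex function. The function and its closed-form subproblem solution are illustrative assumptions, not the composite max-type algorithm of the talk.

```python
import numpy as np

# f(x) = g(x) - h(x) with g(x) = x^4 and h(x) = x^2, both convex.
# MM/DCA step: replace -h(x) by its linearization at x_k (giving a convex
# majorizer of f), then minimize  g(x) - h'(x_k) * x.  Setting the
# derivative to zero:  4 x^3 = 2 x_k,  so  x_{k+1} = (x_k / 2) ** (1/3).
x = 1.0
for _ in range(60):
    x = np.cbrt(x / 2.0)

print(x)  # ≈ 0.7071067811865476 = 1/sqrt(2), a stationary point of f
```

Each step decreases f, and the iterates converge to a stationary point of the nonconvex function, exactly the kind of guarantee (strengthened to directional stationarity) discussed in the abstract.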
Javad Lavaei, University of California, Berkeley
Graph-theoretic Convexification of Polynomial Optimization Problems: Theory, Numerical Algorithms, and Case Studies
The area of polynomial optimization has been actively studied for years, where the goal is to find a high-quality solution using an efficient computational method. Some of the important research problems in this area are: i) how does the underlying structure of an optimization problem affect its complexity? ii) how does sparsity help? iii) how to find a near globally optimal solution whenever it is hard to find a global minimum? iv) how to design an efficient numerical algorithm for large-scale non-convex optimization problems? v) how to deal with problems with a mix of continuous and discrete variables? In this talk, we will develop a new mathematical framework to study the above problems. Our framework rests on recent advances in graph theory and optimization, including the notions of OS-vertex sequence and treewidth, matrix completion, conic optimization, and low-rank optimization. Finally, we will study the design of near-linear time algorithms for sparse conic optimization, and offer case studies on fundamental mixed-integer optimization problems appearing in energy and transportation systems.
Caroline Uhler, Massachusetts Institute of Technology
Your dreams may come true with MTP2
Shiqian Ma, University of California, Davis
A Varying-Coefficient Regularized Dual Averaging Algorithm for Regularized Stochastic Optimization
Regularized stochastic optimization arises in many application domains. Proximal stochastic (sub)gradient descent (PSGD) and regularized dual averaging (RDA) are two widely accepted approaches to solving regularized stochastic optimization problems. In practice, PSGD usually converges faster than RDA, while RDA can deal with sparse data more efficiently and promote the structure (e.g., sparsity) of the solution. There have been efforts to combine the advantages of these two algorithms. One such attempt was the proximal follow-the-regularized-leader method (FTRL-Proximal), which was found to be very successful in industry. In this talk, we propose an alternative named varying-coefficient regularized dual averaging (VC-RDA) that also combines the advantages of PSGD and RDA, with convergence guarantees. Moreover, a novel adaptive scaling scheme is incorporated to further accelerate the algorithm. Numerical results indicate that our new method outperforms the existing algorithms mentioned above.
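For context on the structure-promoting property of RDA mentioned above: with l1 regularization, plain RDA (Xiao's version, without the varying coefficients proposed in the talk) admits a closed-form soft-thresholding update on the averaged gradient. The sketch below is a standard-RDA illustration on synthetic sparse regression; the data, parameters, and problem sizes are assumptions.

```python
import numpy as np

# Plain regularized dual averaging (RDA) for the l1-regularized problem
#   min_x  E[ 0.5 * (a.x - b)^2 ] + lam * ||x||_1 ,
# using the closed-form soft-thresholding update on the averaged gradient.
rng = np.random.default_rng(0)
n, lam, gamma = 5, 0.1, 5.0
x_true = np.array([1.0, -1.0, 0.0, 0.0, 0.0])  # sparse ground truth

x = np.zeros(n)
gbar = np.zeros(n)  # running average of all past stochastic gradients
for k in range(1, 5001):
    a = rng.standard_normal(n)
    b = a @ x_true
    g = (a @ x - b) * a         # stochastic gradient of the loss at x
    gbar += (g - gbar) / k      # update the dual average
    # RDA update with beta_k = gamma*sqrt(k): soft-threshold gbar by lam
    x = -(np.sqrt(k) / gamma) * np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0)

print(np.round(x, 2))  # a sparse estimate matching x_true's support
```

The soft-thresholding on the averaged (rather than instantaneous) gradient is what lets RDA produce exactly sparse iterates, the property the abstract contrasts with PSGD.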
Walid Krichene, Google
Continuous-time Dynamics for Stochastic Optimization
Many optimization algorithms can be viewed as a discretization of a continuous-time process. The continuous-time point of view is often useful for simplifying the analysis, drawing connections with physics, and streamlining the design of new algorithms. In this talk, we will review results from continuous-time optimization, from the simple case of gradient flow to more recent results on accelerated methods, and how different interpretations of acceleration motivated different heuristics (restarting, adaptive averaging) which, empirically, can significantly improve the speed of convergence. In the stochastic case, we will discuss the interaction between acceleration and noise, and their effect on convergence rates. We conclude with a brief review of how the same tools can be applied to other problems, such as sampling from log-concave distributions and some classes of non-convex problems.
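The simplest instance of the discretization viewpoint described above: gradient descent is the forward-Euler discretization of the gradient flow x'(t) = -∇f(x(t)). A minimal sketch, where the quadratic objective and step size are assumptions for illustration:

```python
import numpy as np

# Gradient flow  x'(t) = -grad f(x(t))  for  f(x) = 0.5 * x.A.x,
# discretized by forward Euler with step h:
#   x_{k+1} = x_k - h * grad f(x_k),
# which is exactly gradient descent with learning rate h.
A = np.diag([1.0, 10.0])   # eigenvalues 1 and 10, so gradient Lipschitz L = 10
grad = lambda x: A @ x

x = np.array([1.0, 1.0])
h = 0.05                   # stable since h < 2 / L
for _ in range(500):
    x = x - h * grad(x)

print(x)  # very close to [0, 0], the minimizer of f
```

The stability condition h < 2/L is the discrete footprint of the continuous flow: the flow itself converges for any quadratic, but too coarse a discretization diverges.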
Stephen Boyd, Stanford University
Object-Oriented Convex Optimization with CVXPY
The presentation gives an overview of CVXPY, a Python-based optimization environment for convex problems.