BayOpt 2019


The Fifth Bay Area Optimization Meeting will be held on Friday, May 17, 2019, at the University of California, Santa Cruz.

The meeting will bring together leaders in optimization, variational analysis, and applications. Seven lectures by experts in the field will describe state-of-the-art models and algorithms as well as real-world applications from the public and private sectors.

BayOpt 2019 is supported by a generous grant from the Air Force Office of Scientific Research. 


Location
University of California, Santa Cruz 

Engineering 2 Building (E2), Room 180 (the Simularium).

The engineering buildings are located in the northwest corner of the UCSC campus, on Science Hill. From the Core West parking structure, exit at the street-level (2nd) floor, cross the street, and walk past the Baskin Engineering building (E1) to Engineering 2 (E2). The Simularium is off the courtyard between the two buildings.




Program

9:10 am     Welcome

9:20 am     Michael Friedlander, University of British Columbia, Vancouver

10:00 am    Patrick Combettes, North Carolina State University

10:40 am    Break

11:10 am    Mengdi Wang, Princeton University

11:50 am    Michael Jordan, University of California, Berkeley

12:30 pm    Lunch

2:10 pm     Matthew Carlyle, Naval Postgraduate School

2:50 pm     Amitabh Basu, Johns Hopkins University

3:30 pm     Break

4:00 pm     David Woodruff, University of California, Davis


Registration
There is no registration fee, but attendees are required to register before May 10 by email to Johannes Royset.

Directions and Parking

Directions to Santa Cruz

From Southern and Central California:
Take Highway 101 north to Highway 156 west to Highway 1 north. Follow Highway 1 north to Santa Cruz.

From Northern California:
Take Interstate 5 south to Interstate 80 west to Interstate 680 south, which becomes Interstate 280 north. Then take Highway 17 south to Santa Cruz, then Highway 1 north.

From San Francisco Airport:
Take Highway 101 south to Highway 85 south, to Highway 17 south to Santa Cruz, then Highway 1 north.

From San Jose Airport:
Take Interstate 880 south, which becomes Highway 17 south, to Santa Cruz, then Highway 1 north.

From Monterey Airport:
Take Highway 1 north to Santa Cruz.

Directions to Campus

Once you're in Santa Cruz on Highway 1 north, continue as it becomes Mission Street through town. Turn right on Bay Street and follow it to the campus entrance. If you are using an online service to get directions, enter the following address for UC Santa Cruz: 1156 High Street, Santa Cruz, CA 95064. A map of the UCSC campus can be found at https://maps-gis.ucsc.edu

On-Campus Parking

The closest parking to the engineering buildings is the Core West Structure; see https://maps-gis.ucsc.edu/printable-maps/parking-map-09182018.pdf

Two parking attendants will be on hand on the 2nd floor of the Core West parking structure from 8:00 AM to 11:00 AM to sell parking permits to meeting attendees. Permits cost $10.00. Both a picture ID and proof of registration are required at the time of purchase.

If you arrive between 11:00 AM and 1:00 PM, you can purchase your permit from the Main Entrance Kiosk on Coolidge Drive. From 1:00 PM to 5:00 PM, all permits must be purchased at the TAPS Sales Office. A map of both locations can be found here: https://taps.ucsc.edu/parking/parking-permits/index.html



Local Organizers
Qi Gong, University of California, Santa Cruz
Yu Zhang, University of California, Santa Cruz


Program Committee
Johannes O. Royset, Naval Postgraduate School (Chair)
Anil Aswani, University of California, Berkeley
Richard Cottle, Stanford University
Matthias Koeppe, University of California, Davis






Abstracts

Michael Friedlander, University of British Columbia, Vancouver

Title: Polar duality and atomic alignment

Abstract: The aim of structured optimization is to assemble a solution, using a given set of atoms, to fit a model to data. Polarity, which generalizes the notion of orthogonality from linear sets to general convex sets, plays a special role in a simple and geometric form of convex duality. The atoms and their implicit "duals" share a special relationship, and their participation in the solution assembly depends on a notion of alignment. This geometric perspective leads to practical algorithms for large-scale problems.
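
As background (our addition, not part of the talk): for a convex set C containing the origin, the polar is defined by

    \[
      C^{\circ} \;=\; \{\, y \;:\; \langle x, y \rangle \le 1 \ \text{ for all } x \in C \,\}.
    \]

When C is a linear subspace, the inequality can hold for all x only if every inner product vanishes, so the polar reduces to the orthogonal complement; for a general convex C, a pair x in C and y in C° attaining the polar inequality with equality is, roughly, the kind of "aligned" pair the abstract refers to.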


Patrick Combettes, North Carolina State University

Title: The pervasiveness of proximal point iterations

Abstract: The scope of the proximal point algorithm for finding a zero of a monotone operator may seem rather limited. We show that it can actually be used to devise and analyze a surprisingly broad class of algorithms in nonlinear analysis. In particular (joint work with J.-C. Pesquet), it will be seen that the proximal point formalism provides valuable new insights into the static and asymptotic properties of deep neural networks.
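
For orientation (a textbook sketch, not taken from the talk): given a maximally monotone operator A on a Hilbert space, the proximal point algorithm iterates the resolvent of A,

    \[
      x_{n+1} \;=\; J_{\gamma A}\, x_n, \qquad J_{\gamma A} \;=\; (\mathrm{Id} + \gamma A)^{-1}, \qquad \gamma > 0,
    \]

whose fixed points are exactly the zeros of A. When A is the subdifferential of a proper convex lower semicontinuous function f, the resolvent is the proximity operator of f. Many common activation functions are themselves proximity operators (for instance, ReLU is the projection onto the nonnegative orthant), which suggests one route to the neural-network connection mentioned above.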


Mengdi Wang, Princeton University

Title: Learning to control in metric space

Abstract: We study online reinforcement learning for finite-horizon deterministic control systems with arbitrary state and action spaces. Suppose that the transition dynamics and reward function are unknown, but the state and action spaces are endowed with a metric that characterizes the proximity between different states and actions. We provide a surprisingly simple upper-confidence reinforcement learning algorithm that uses a function-approximation oracle to estimate optimistic Q-functions from experiences. We prove sublinear regret that depends on the doubling dimension of the state space with respect to the given metric, which is intrinsic and typically much smaller than the ambient dimension. We also establish a matching regret lower bound. The proposed method can be adapted to work for more structured systems.
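
For reference (a standard definition, our addition): the doubling dimension of a metric space is the smallest d such that every ball can be covered by at most 2^d balls of half its radius, i.e.

    \[
      \dim_{\mathrm{doub}}(\mathcal{X}) \;=\; \log_2 \max_{x,\, r > 0} N\big( B(x, r),\, r/2 \big),
    \]

where N(B, r/2) is the minimal number of radius-r/2 balls needed to cover the ball B(x, r). For states lying near a low-dimensional manifold in a high-dimensional ambient space, this quantity tracks the intrinsic rather than the ambient dimension, which is what makes regret bounds stated in terms of it attractive.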


Michael Jordan, University of California, Berkeley

Title: On the Theory of Gradient-Based Learning: A View from Continuous Time

Abstract: Gradient-based optimization has provided the theoretical and practical foundations on which recent developments in statistical machine learning have reposed. A complementary set of foundations is provided by Monte Carlo sampling, where gradient-based methods have also been leading the way in recent years.  We explore links between gradient-based optimization algorithms and gradient-based sampling algorithms.  Although these algorithms are generally studied in discrete time, we find that fundamental insights can be obtained more readily if we work in continuous time.  A particularly striking finding is that there is a counterpart of Nesterov acceleration in the world of Langevin diffusion.
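
For concreteness (our addition, using standard formulations): the continuous-time objects in play are the gradient flow and the overdamped Langevin diffusion for a potential f,

    \[
      \dot{X}_t = -\nabla f(X_t), \qquad dX_t = -\nabla f(X_t)\, dt + \sqrt{2}\, dW_t,
    \]

together with the ODE limit of Nesterov acceleration identified by Su, Boyd, and Candès,

    \[
      \ddot{X}_t + \tfrac{3}{t}\, \dot{X}_t + \nabla f(X_t) = 0.
    \]

The Langevin diffusion has stationary density proportional to e^{-f}, so sampling from it is the Monte Carlo counterpart of minimizing f; the accelerated analogue alluded to in the abstract arises from underdamped (momentum-carrying) variants of these dynamics.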


Matthew Carlyle, Naval Postgraduate School

Title: Defending Maximum Flows from Worst-case Attacks

Abstract: We present a reformulation of the trilevel defender-attacker-defender maximum flow problem as an s-t-cut defense problem, and provide a corresponding decomposition algorithm for solving the problem. On moderate to large problems our reformulation results in significantly smaller master problem instances than the typical flow-based formulation.  Our algorithm requires far fewer iterations and has faster solution times than the current standard nested decomposition algorithms. We provide small examples and an instance based on historical data to illustrate the formulation, and we report experimental results on examples of varying sizes and topologies.
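
Schematically (our sketch of the generic problem class, not the authors' exact formulation), the trilevel defender-attacker-defender maximum flow problem reads

    \[
      \max_{w \in W} \; \min_{x \in X(w)} \; \max_{y \in Y(x)} \; \mathrm{flow}(y),
    \]

where w are defense decisions, x are attacks on the defended network, and y are feasible flows on what survives. By max-flow/min-cut duality, the innermost maximization can be replaced by a minimization over s-t cuts, which is what permits the cut-defense reformulation described above.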


Amitabh Basu, Johns Hopkins University

Title: Admissibility of solution estimators for stochastic optimization

Abstract: We look at stochastic optimization problems through the lens of statistical decision theory. In particular, we address the admissibility, in the statistical decision theory sense, of the natural sample average estimator for a stochastic optimization problem (also known as the empirical risk minimization (ERM) rule in the learning literature). It is well known that for general stochastic optimization problems the sample average estimator may not be admissible; this is known as Stein's paradox in the statistics literature. We show that for optimizing stochastic linear functions over compact sets, the sample average estimator is admissible.
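
To fix notation (standard definitions, our addition): given i.i.d. samples xi_1, ..., xi_n of the random parameter, the sample average (ERM) estimator solves the empirical counterpart of the stochastic program:

    \[
      \hat{x}_n \;\in\; \operatorname*{argmin}_{x \in \mathcal{X}} \; \frac{1}{n} \sum_{i=1}^{n} F(x, \xi_i)
      \quad \text{as an estimator of} \quad
      x^{*} \;\in\; \operatorname*{argmin}_{x \in \mathcal{X}} \; \mathbb{E}_{\xi}\big[ F(x, \xi) \big].
    \]

An estimator is admissible if no other estimator has risk no larger at every parameter value and strictly smaller at some; Stein's classical example shows that the sample mean of a multivariate Gaussian fails this test in dimension three and higher, which is the phenomenon the abstract invokes.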