17:00 (CET)
Abstract:
Many applications require solving a family of optimization problems, indexed by some hyperparameter vector, to obtain an entire solution map (or solution path in the case of a single hyperparameter). Traditional approaches proceed by discretizing the hyperparameter space and solving a series of optimization problems. This talk presents an alternative, and often computationally advantageous, approach based on learning. Our method parametrizes the solution map within a given function class and solves a single stochastic optimization problem. When using constant step-size stochastic gradient descent, the uniform error of our learned solution map relative to the true map exhibits linear convergence to a constant that diminishes as the expressiveness of the function class increases. In the special case of a one-dimensional hyperparameter and under sufficient smoothness, we present even stronger results for the learning approach that demonstrate complexity improvements over the best known results for uniform discretization. We complement our theoretical results with evidence of strong numerical performance on imbalanced binary classification and portfolio optimization examples. Time permitting, we discuss some extensions and related work on parametric constrained robust optimization problems.
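In schematic form (the notation here is ours, not necessarily the speaker's: $f_\theta$ denotes a map in the chosen function class, $P$ a distribution over the hyperparameter $\lambda$, and $h(\cdot, \lambda)$ the objective of the problem indexed by $\lambda$), the approach replaces a grid of separate solves with a single stochastic program of the form

\min_{\theta} \; \mathbb{E}_{\lambda \sim P} \big[ h(f_\theta(\lambda), \lambda) \big],

whose approximate solution $f_{\hat{\theta}}$ then serves as a surrogate for the true solution map $\lambda \mapsto x^\star(\lambda)$.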
Bio:
Paul Grigas is an associate professor of Industrial Engineering and Operations Research at the University of California, Berkeley. Paul’s research interests are broadly in optimization, machine learning, and data-driven decision-making, with particular emphasis on contextual stochastic optimization and algorithms at the interface of machine learning and continuous optimization. Paul’s research is funded by the National Science Foundation, and he is affiliated with the NSF Artificial Intelligence (AI) Research Institute for Advances in Optimization (AI4OPT). Paul was awarded first place in the 2020 INFORMS Junior Faculty Interest Group (JFIG) Paper Competition and the 2015 INFORMS Optimization Society Student Paper Prize. He received his B.S. in Operations Research and Information Engineering (ORIE) from Cornell University in 2011, and his Ph.D. in Operations Research from MIT in 2016.
15:00 (CET)
Saint-Gobain Research Paris
Finite Adaptability in Robust Optimization: Asymptotic Optimality and Tractability
Abstract:
The k-adaptability problem was introduced by Bertsimas and Caramanis (2010) to approximate two-stage mixed-integer linear robust optimization problems, which are typically intractable. It models the pre-computation, in the first stage, of k candidate values for the second-stage variable. Once the uncertainty is revealed, the second stage consists of selecting one of the k pre-computed values that satisfies the constraints and minimizes the objective.
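In a standard schematic form (notation ours, for illustration: first-stage decision $x$, candidate recourse decisions $y^1, \dots, y^k$, and uncertainty $\xi$ ranging over a set $\Xi$), the k-adaptability problem reads

\min_{x,\, y^1, \dots, y^k} \; \max_{\xi \in \Xi} \; \min_{i \,:\, (x, y^i) \text{ feasible under } \xi} \; c(\xi)^\top x + d(\xi)^\top y^i,

with the convention that the inner minimum equals $+\infty$ when no candidate $y^i$ is feasible for the realized $\xi$.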
When the dimensions of the variables are fixed, we derive a strongly polynomial algorithm, with quartic running time, for a case of k-adaptability with continuous recourse and a one-dimensional uncertainty set. This result comes from applying Megiddo's parametric search (1983) to his algorithm for linear programming in fixed dimension (1984).
Moreover, we observe that the decision version of k-adaptability is an instance of a geometric hitting set problem for so-called "affine families of polyhedra".
Further techniques, such as quantifier elimination in the existential theory of the reals, allow us to derive a strongly polynomial bound for this decision version of k-adaptability when the uncertainty set is a polyhedron of any fixed dimension.
Joint work with Xavier Goaoc (Université de Lorraine) and Jean Cardinal (Université Libre de Bruxelles).
Abstract:
Two-stage robust optimization problems are of particular importance in operations research, as they model decision-making under uncertainty with the possibility of recourse actions after the uncertainty is revealed. A fundamental technique for addressing these problems, finite adaptability, was introduced in 2010 by Bertsimas and Caramanis. It restricts the recourse decisions to take only a limited number of distinct values, making the problem more tractable while still capturing some flexibility.
In this work, we make two main contributions. First, we provide a counterexample to a condition, proposed by Bertsimas and Caramanis, that was intended to ensure asymptotic optimality of finite adaptability, and we give a corrected condition. Second, we identify special cases of finite adaptability that can be solved in polynomial time. Our approach relies on new results in discrete geometry concerning the covering of a polytope with convex sets.
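Asymptotic optimality here refers to the optimal value $z_k$ of the k-adaptable problem converging to the optimal value $z^\star$ of the fully adaptive two-stage problem,

\lim_{k \to \infty} z_k = z^\star,

under appropriate conditions (notation ours; the precise corrected condition is stated in the preprint below).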
For more details, the full preprint is available here: https://arxiv.org/abs/2305.05399
Bio:
Sarah Wajsbrot is a PhD student at LORIA, Université de Lorraine, in the computational geometry group GAMBLE. She earned a BSc and an MSc in Mathematics from Sorbonne Université and an MSc in Mathematics, with a specialization in optimization, from Université Paris-Saclay.
Her research interests are robust multistage optimization, geometric optimization, and combinatorial convexity. She uses techniques from computational geometry to derive complexity bounds and algorithms for robust optimization problems.
Bio:
I am a research engineer at Saint-Gobain Research in Paris. I recently completed my PhD at CEDRIC–CNAM, where I worked on optimization under uncertainty applied to industrial problems. At Saint-Gobain, I develop optimization models to support decision-making in complex industrial contexts.
15:00 (CET)
Georgia Institute of Technology
Rectangularity and Duality of Distributionally Robust Markov Decision Processes
Abstract:
In this talk we discuss several approaches to the formulation of distributionally robust counterparts of Markov Decision Processes, where the transition kernels are not specified exactly but rather are assumed to be elements of the corresponding ambiguity sets. The intent is to clarify some connections between the game and static formulations of distributionally robust MDPs, and to explain the role of rectangularity associated with ambiguity sets in determining these connections.
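For orientation (schematic notation ours, not the speaker's): in the widely used $(s,a)$-rectangular setting the ambiguity set factorizes across state-action pairs, $\mathcal{P} = \bigotimes_{s,a} \mathcal{P}_{s,a}$, and the robust value function satisfies a Bellman-type equation

V(s) = \max_{a} \; \min_{p \in \mathcal{P}_{s,a}} \; \mathbb{E}_{s' \sim p} \big[ r(s,a) + \gamma V(s') \big],

and the talk examines how such game-type formulations relate to static ones when rectangularity of this kind holds or fails.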
Bio:
Alexander Shapiro is the A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology. In 2013 he was awarded the Khachiyan Prize of INFORMS for lifetime achievements in optimization, and in 2018 he received the Dantzig Prize, awarded jointly by the Mathematical Optimization Society and the Society for Industrial and Applied Mathematics. In 2020 he was elected to the National Academy of Engineering, and in 2021 he received the John von Neumann Theory Prize awarded by INFORMS.