15:00 (CET)
University of Cincinnati
Solving integer robust optimization problems with integer decision-dependent uncertainty sets
Abstract:
We study a class of decision-making problems under uncertainty, where the planner hedges against the worst-case realization in an integer uncertainty set that has a functional dependence on the decisions of the planner. We propose three methods to reformulate and solve these problems: the first is based on a single-level semi-infinite reformulation, where the dependence of the uncertainty on the decisions is modeled with auxiliary binary variables; the second is based on a single-follower bilevel reformulation; and the third is a multiple-follower bilevel reformulation in which each robust constraint is associated with a follower. We provide additional enhancements to strengthen the linear programming relaxation of the single-level formulation and propose a column and constraint generation algorithm to solve it. The multiple-follower bilevel formulation is further transformed into a single-level multicommodity flow mixed-integer program by exploiting recent advancements in decision diagrams. Our numerical experiments show that the column and constraint generation algorithm can solve small- and medium-scale instances within a few minutes. This method is outperformed by the multicommodity flow formulation on problems with a single robust constraint. We complement our experiments by using the column and constraint generation algorithm to solve a constrained shortest path problem with maintenance decisions and uncertain arc failures. The proposed method can solve instances of this problem with hundreds of nodes and thousands of arcs in a few minutes.
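For readers unfamiliar with the technique, column and constraint generation alternates between an optimistic master problem over the scenarios found so far and an adversarial separation step. A minimal generic sketch, not the speaker's implementation; the toy master and adversary below are illustrative stand-ins:

```python
def ccg(solve_master, worst_case, tol=1e-6, max_iters=100):
    """Generic column-and-constraint generation loop.

    solve_master(scenarios): plan hedged only against the scenarios
        generated so far, plus a lower bound on the robust optimum.
    worst_case(x): a worst scenario for plan x and x's true robust cost.
    """
    scenarios, best_x, upper = [], None, float("inf")
    for _ in range(max_iters):
        x, lower = solve_master(scenarios)
        u, value = worst_case(x)
        if value < upper:
            best_x, upper = x, value
        if upper - lower <= tol:        # bounds meet: robust optimum found
            return best_x, upper
        scenarios.append(u)             # add the violating scenario and repeat
    return best_x, upper

# Toy instance: choose x in {0, 1, 2} to minimize max over u in {0, 2} of |x - u|.
def solve_master(scenarios):
    x = min(range(3), key=lambda x: max((abs(x - u) for u in scenarios), default=0))
    return x, max((abs(x - u) for u in scenarios), default=0)

def worst_case(x):
    u = max((0, 2), key=lambda u: abs(x - u))
    return u, abs(x - u)
```

On the toy instance the loop terminates after three iterations with the hedged plan x = 1, whose worst-case cost is 1.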
Bio:
Leonardo Lozano is an associate professor in the Operations, Business Analytics & Information Systems Department at the University of Cincinnati. He received his B.Sc. degree from Universidad de los Andes, his M.Sc. degree from University of Florida, and his Ph.D. degree from Clemson University. His research focuses on exact algorithms for discrete optimization and has been published in Operations Research, Mathematical Programming, Transportation Science, Networks (Glover-Klingman 2020 best paper award), and INFORMS Journal on Computing, among others. His research has been funded by the Office of Naval Research, the Air Force Office of Scientific Research, and Google.
15:00 (CET)
Ben-Gurion University of the Negev
Robust Extensible Bin Packing and Revisiting the Convex Knapsack Problem
Abstract:
We study a robust extensible bin packing problem under a budgeted uncertainty model in which item sizes lie in the intersection of a box with a one-norm ball. We propose a scenario generation algorithm for this problem, which alternates between solving a master robust bin-packing problem with a finite uncertainty set and solving a separation problem. We first show that the separation problem is strongly NP-hard given solutions to the continuous relaxation of the master problem. Then, focusing on the separation problem for the integer master problem, we show that it becomes a special case of the continuous convex knapsack problem, which is known to be weakly NP-hard. Next, we prove that our special case remains NP-hard even when each of the functions is piecewise linear with only two pieces. We develop a pseudo-polynomial dynamic program (DP) and a fully polynomial-time approximation scheme (FPTAS) for our special case, with running times that match those of a binary knapsack FPTAS. Our computational study shows that the DP can be significantly more efficient in practice than solving the problem with specially ordered set (SOS) constraints using advanced mixed-integer programming (MIP) solvers. Our experiments also demonstrate the application of our separation method to solving the robust extensible bin packing problem, including an evaluation of deferring the exact solution of the master problem by separating based on approximate master solutions in intermediate iterations. Finally, a case study based on real elective surgery data demonstrates the potential advantage of our model compared with the actual schedule and with optimal nominal schedules.
Joint work with Michael Poss (CNRS, LIRMM) and Yariv Marmor (ORT Braude College of Engineering)
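The abstract notes that the DP and FPTAS match the running times of a binary knapsack FPTAS. As a point of reference only (the standard textbook algorithm, not the speakers' DP for the convex case), the classic pseudo-polynomial 0/1 knapsack DP looks like this:

```python
def knapsack_dp(values, weights, capacity):
    """Classic pseudo-polynomial 0/1 knapsack DP, O(n * capacity) time.

    dp[c] holds the best value achievable with total weight at most c.
    """
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # descending: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

The pseudo-polynomial dependence on `capacity` is what an FPTAS removes by rounding, trading exactness for a (1 - eps) guarantee.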
Bio:
Noam Goldberg is an Associate Professor in the Department of Industrial Engineering and Management at Ben-Gurion University of the Negev. He was previously a faculty member at Bar-Ilan University, first as a Lecturer (2013–2018) and later as a tenured Senior Lecturer (2018–2024). In Fall 2019, he was a Visiting Associate Professor at Rutgers Business School in New Brunswick. He received his Ph.D. in Operations Research from Rutgers University (RUTCOR) in 2010, following an early career in system and software engineering in telecommunications. Between 2009 and 2013, he held postdoctoral research appointments at the Technion–Israel Institute of Technology, Argonne National Laboratory, and Carnegie Mellon University. He currently serves on the editorial boards of Mathematical Programming Computation and Soft Computing.
15:00 (CET)
Leiden University
Empirical Analysis of Upper Bounds of Robustness Distributions using Adversarial Attacks
Abstract:
Applications of optimization with uncertain data often feature the possibility to reduce the uncertainty at a given query cost, e.g., by conducting measurements or surveys, or by paying a third party in advance to limit the deviations. Decision-dependent uncertainty sets are used to model this type of application. Assuming that uncertain cost parameters lie in bounded, closed intervals, we introduce the concept of optimization problems under controllable uncertainty (OCU), where the optimizer can shrink each of these intervals around a certain value called the hedging point, possibly reducing it to a single point. Different models arise depending on whether the hedging points are known in advance and on when the narrowing down, the underlying optimization, and the realization of the remaining uncertainty take place. In some of these settings an optimizer might query a parameter solely to reduce the uncertainty for other parameters, a phenomenon we call budget deflection. In the talk, we discuss two example problem settings, one with known and one with unknown hedging points, where we handle the remaining uncertainty via the paradigm of robust optimization. We give necessary conditions for budget deflection to appear in these settings.
Joint work with Rebecca Marx, Maximilian Merkert, Tim Niemann and Sebastian Stiller.
Abstract:
In recent years, despite the remarkable performance of neural networks on various tasks, their vulnerability to adversarial perturbations has raised concerns about the reliability and security of these models. Adversarial perturbations are small, imperceptible modifications to inputs that lead to misclassifications. This rising concern is reflected in the sizeable body of scientific work investigating different kinds of attacks, training neural networks to be robust against perturbations, and formal verification of different robustness properties. A prominent example of such a property is local robustness, which determines, for a given network and input, whether the network is robust against input perturbations up to a certain magnitude. In this talk Annelot will explain how these perturbations work, how to find them, and how they can be used to provide insight into the robustness of neural networks.
Bio:
Eva Ley is a doctoral researcher at the Institute for Mathematical Optimization at TU Braunschweig, Germany. She studied mathematics at the University of Trier, the University of Southampton, and TU Munich. Her work focuses on bilevel and robust optimization problems with discrete structure.
Bio:
Annelot is a 4th-year PhD student at Leiden University and currently a visiting researcher at Oxford University. Her research focuses on Neural Network Verification and methods for quantifying the robustness of neural networks, with a particular interest in formal verification techniques and large-scale empirical evaluation. Beyond her research, Annelot has contributed to academic workshops and seminars on trustworthy AI in the TAILOR project.
17:00 (CET)
Abstract:
Many applications require solving a family of optimization problems, indexed by some hyperparameter vector, to obtain an entire solution map (or solution path in the case of a single hyperparameter). Traditional approaches proceed by discretizing and solving a series of optimization problems. This talk presents an alternative, and often computationally advantageous, approach based on learning. Our method parametrizes the solution map within a given function class and solves a single stochastic optimization problem. When using constant step-size stochastic gradient descent, the uniform error of our learned solution map relative to the true map exhibits linear convergence to a constant that diminishes as the expressiveness of the function class increases. In the special case of a one-dimensional hyperparameter and under sufficient smoothness, we present even stronger results for the learning approach that demonstrate complexity improvements over the best known results for uniform discretization. We complement our theoretical results with evidence of strong numerical performance on imbalanced binary classification and portfolio optimization examples. Time permitting, we discuss some extensions and related work on parametric constrained robust optimization problems.
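As a rough illustration of the learning approach described in the abstract, here is a minimal sketch under simplifying assumptions of ours (a scalar decision, a hyperparameter sampled uniformly from [0, 1], a linear function class, and a toy objective), not the speaker's method or code:

```python
import random

def learn_solution_map(grad_f, features, theta0, step=0.1, iters=5000, seed=0):
    """Fit a parametric solution map x_theta(lam) = <theta, features(lam)>
    by constant step-size SGD on E_lam[ f(x_theta(lam), lam) ], lam ~ U[0, 1].

    grad_f(x, lam): derivative of the objective in the (scalar) decision x.
    """
    rng = random.Random(seed)
    theta = list(theta0)
    for _ in range(iters):
        lam = rng.random()                           # sample a hyperparameter
        phi = features(lam)
        x = sum(t * p for t, p in zip(theta, phi))   # current map at lam
        g = grad_f(x, lam)
        theta = [t - step * g * p for t, p in zip(theta, phi)]  # chain rule
    return theta

# Toy family: min_x (x - lam)^2, whose true solution path is x*(lam) = lam.
theta = learn_solution_map(grad_f=lambda x, lam: 2 * (x - lam),
                           features=lambda lam: (1.0, lam),
                           theta0=(0.0, 0.0))
```

Because the true path here lies inside the linear function class, a single SGD run recovers the whole map; with a less expressive class the learned map would instead converge to within the class's approximation error, as the abstract describes.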
Bio:
Paul Grigas is an associate professor of Industrial Engineering and Operations Research at the University of California, Berkeley. Paul’s research interests are broadly in optimization, machine learning, and data-driven decision-making, with particular emphasis on contextual stochastic optimization and algorithms at the interface of machine learning and continuous optimization. Paul’s research is funded by the National Science Foundation, and he is affiliated with the NSF Artificial Intelligence (AI) Research Institute for Advances in Optimization (AI4OPT). Paul was awarded 1st place in the 2020 INFORMS Junior Faculty Interest Group (JFIG) Paper Competition and the 2015 INFORMS Optimization Society Student Paper Prize. He received his B.S. in Operations Research and Information Engineering (ORIE) from Cornell University in 2011, and his Ph.D. in Operations Research from MIT in 2016.
15:00 (CET)
Saint-Gobain Research Paris
Finite Adaptability in Robust Optimization: Asymptotic Optimality and Tractability
Abstract:
The k-adaptability problem was introduced by Bertsimas and Caramanis (2010) to approximate two-stage mixed-integer linear robust optimization problems, which are typically intractable. It models the pre-computation, in the first stage, of k candidate values for the second-stage variable. Once the uncertainty is revealed, the second stage consists of selecting one of the k pre-computed values that satisfies the constraints and minimizes the objective.
By fixing the dimensions of the variables, we derive a strongly polynomial quartic-time algorithm for a case of k-adaptability with continuous recourse and a one-dimensional uncertainty set. This follows from applying Megiddo's parametric search (1983) to his algorithm for linear programming in fixed dimension (1984).
Moreover, we observe that the decision version of k-adaptability is an instance of a geometric hitting set problem for so-called "affine families of polyhedra".
Further techniques, such as quantifier elimination in the existential theory of the reals, allow us to derive a strongly polynomial bound for this decision version of k-adaptability when the uncertainty set is a polyhedron of any fixed dimension.
Joint work with Xavier Goaoc (Université de Lorraine) and Jean Cardinal (Université Libre de Bruxelles).
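The second-stage selection step of k-adaptability can be sketched for a finite uncertainty set. This is an illustrative toy of ours (names, toy costs, and the finite scenario set are assumptions), not the algorithms from the talk:

```python
def k_adapt_value(candidates, scenarios, cost, feasible):
    """Worst-case cost of a k-adaptable policy over a finite scenario set.

    After scenario u is revealed, the planner picks the cheapest of the
    k pre-computed candidates that is feasible for u; the adversary picks
    the scenario that makes this choice as expensive as possible.
    """
    def best_response(u):
        vals = [cost(y, u) for y in candidates if feasible(y, u)]
        return min(vals) if vals else float("inf")   # no feasible candidate
    return max(best_response(u) for u in scenarios)

# Toy: two candidates hedging against u in {0, 1, 2} with cost |y - u|.
value = k_adapt_value(candidates=[0, 2], scenarios=[0, 1, 2],
                      cost=lambda y, u: abs(y - u),
                      feasible=lambda y, u: True)
```

The hard part, which the talk addresses, is choosing the k candidates in the first stage and handling continuous uncertainty sets; the evaluation above only shows the min-max structure of the second stage.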
Abstract:
Of particular importance in operations research are two-stage robust optimization problems, as they model decision-making under uncertainty with the possibility of recourse actions after uncertainty is revealed. A fundamental technique to address this problem, finite adaptability, was introduced in 2010 by Bertsimas and Caramanis. It restricts the range of recourse decisions to take only a limited number of distinct values, making the problem more tractable while still capturing some flexibility.
In this work, we make two main contributions. First, we provide a counter-example to a condition ensuring asymptotic optimality of finite adaptability proposed by Bertsimas and Caramanis, and we give a corrected condition. Second, we identify special cases of finite adaptability that can be solved in polynomial time. Our approach relies on new results in discrete geometry concerning the covering of a polytope with convex sets.
For more details, the full preprint is available here: https://arxiv.org/abs/2305.05399
Bio:
Sarah Wajsbrot is a PhD student at LORIA, Université de Lorraine, in the computational geometry group GAMBLE. She earned a BSc and an MSc in Mathematics from Sorbonne Université and an MSc in Mathematics specializing in optimization from Université Paris-Saclay.
Her research interests are robust multistage optimization, geometric optimization and combinatorial convexity. She uses techniques from computational geometry to derive complexity bounds and algorithms for some robust optimization problems.
Bio:
I am a research engineer at Saint-Gobain Research in Paris. I recently completed my PhD at CEDRIC–CNAM, where I worked on optimization under uncertainty applied to industrial problems. At Saint-Gobain, I develop optimization models to support decision-making in complex industrial contexts.
15:00 (CET)
Georgia Institute of Technology
Rectangularity and Duality of Distributionally Robust Markov Decision Processes
Abstract:
In this talk we discuss several approaches to the formulation of distributionally robust counterparts of Markov Decision Processes, where the transition kernels are not specified exactly but rather are assumed to be elements of the corresponding ambiguity sets. The intent is to clarify some connections between the game and static formulations of distributionally robust MDPs, and explain the role of rectangularity associated with ambiguity sets in determining these connections.
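Rectangularity can be made concrete with a small sketch: under (s, a)-rectangular ambiguity sets the adversary may choose a transition kernel independently for each state-action pair, so the robust Bellman operator decomposes into an inner minimization per pair. The following toy implementation with finite ambiguity sets is our illustration, not the speaker's formulation:

```python
def robust_value_iteration(P_sets, R, gamma=0.9, iters=500):
    """Value iteration for an (s, a)-rectangular robust MDP.

    P_sets[s][a] is the ambiguity set for state s and action a, here a
    finite list of candidate transition distributions. Rectangularity lets
    the adversary pick a kernel independently for every (s, a) pair, so
    each Bellman update needs only an inner min over that pair's set.
    """
    V = [0.0] * len(P_sets)
    for _ in range(iters):
        V = [max(R[s][a]
                 + gamma * min(sum(p_j * V_j for p_j, V_j in zip(p, V))
                               for p in P_sets[s][a])
                 for a in range(len(P_sets[s])))
             for s in range(len(P_sets))]
    return V

# Toy: in state 0, a risky action (reward 1, but the adversary may move us
# to the absorbing zero-reward state 1) competes with a safe stay-put action.
P_sets = [[[(1.0, 0.0), (0.0, 1.0)],     # s=0, a=0: ambiguous kernel
           [(1.0, 0.0)]],                # s=0, a=1: certain stay
          [[(0.0, 1.0)]]]                # s=1: absorbing
R = [[1.0, 0.5], [0.0]]
V = robust_value_iteration(P_sets, R)
```

In the toy, the adversary makes the risky action worth only its immediate reward of 1, so the robust policy stays put for 0.5 per step, giving V[0] = 0.5 / (1 - 0.9) = 5. Without rectangularity the adversary's kernel choices would be coupled across pairs and this decomposition would no longer be valid, which is the kind of distinction the talk examines.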
Bio:
Alexander Shapiro is the A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Institute of Technology. In 2013 he was awarded the Khachiyan Prize of INFORMS for lifetime achievements in optimization, and in 2018 he was a recipient of the Dantzig Prize awarded by the Mathematical Optimization Society and the Society for Industrial and Applied Mathematics. In 2020 he was elected to the National Academy of Engineering. In 2021 he was a recipient of the John von Neumann Theory Prize awarded by INFORMS.