15:00 (CET)
University of Amsterdam
K-adaptability approach to proton radiation therapy robust treatment planning
Abstract:
Uncertainties such as setup and range errors can significantly compromise proton therapy quality. These uncertainties are taken into account in the planning procedure. A discrete uncertainty set is often constructed to represent different uncertainty scenarios. A min-max robust optimization approach is then utilized to optimize the worst-case performance of a radiation therapy plan against the uncertainty set. However, the min-max approach can be too conservative, as a single plan has to account for the entire uncertainty set. K-adaptability is a novel approach to robust optimization that covers the uncertainty set with multiple (K) solutions, reducing the conservativeness. Solving K-adaptability to optimality is known to be computationally intractable. To that end, we developed a novel and efficient K-adaptability heuristic that iteratively clusters the scenarios based on plan-scenario performance for the proton radiation therapy planning problem.
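As a rough illustration of the clustering idea (not the speaker's clinical implementation), the following Python/cvxpy sketch alternates between computing one min-max plan per scenario cluster and reassigning each scenario to the plan that serves it best. The toy plan model (a simplex-constrained weight vector), the random scenario costs, and the iteration limit are assumptions made for the example.

```python
# Toy K-adaptability heuristic: iteratively cluster scenarios by plan performance.
# Illustrative only -- the plan model and random costs are assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_scenarios, n_vars, K = 30, 5, 3
C = rng.uniform(1.0, 2.0, size=(n_scenarios, n_vars))  # cost of plan x under scenario s: C[s] @ x

def min_max_plan(scenario_ids):
    """Single robust plan for a cluster: minimize worst-case cost over its scenarios."""
    if len(scenario_ids) == 0:
        return np.full(n_vars, 1.0 / n_vars)   # fallback for an empty cluster
    x = cp.Variable(n_vars, nonneg=True)
    worst = cp.max(C[scenario_ids] @ x)
    cp.Problem(cp.Minimize(worst), [cp.sum(x) == 1]).solve()
    return x.value

# Start from an arbitrary partition of the scenarios into K clusters.
assignment = np.arange(n_scenarios) % K
for it in range(10):
    plans = [min_max_plan(np.where(assignment == k)[0]) for k in range(K)]
    costs = np.stack([C @ plans[k] for k in range(K)], axis=1)  # (scenario, plan)
    new_assignment = costs.argmin(axis=1)           # reassign each scenario to its best plan
    if np.array_equal(new_assignment, assignment):  # stop when the clustering stabilizes
        break
    assignment = new_assignment

print("worst-case cost with K plans:", round(costs.min(axis=1).max(), 4))
```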
Abstract:
Growing integration of learning-based controllers in autonomous systems raises serious concerns about reliability and safety. These controllers are particularly vulnerable to distribution shifts arising from discrepancies between data sources encountered during training and deployment. Such discrepancies can precipitate serious performance degradation or even catastrophic failure in high-stakes applications such as autonomous driving and smart power grids. While classical robust control frameworks (e.g., H∞ control) mitigate such risks through worst-case analysis, they often disregard statistical structure in the data, leading to overly conservative solutions. Compounding the challenge, distribution shifts in such dynamic settings frequently exhibit long-range temporal correlations induced by feedback interactions with evolving environments.
To address these limitations, we first present a distributionally robust control approach to tackle temporally correlated distribution shifts over finite horizons. By modeling uncertainty via a Wasserstein ball around a nominal distribution, this formulation yields an equivalent semidefinite program (SDP). Although the resulting controller, informed by empirical data, judiciously balances robustness and performance, the computational complexity of this SDP scales poorly with the time horizon, limiting its practical use for real-time implementation. To overcome this scalability bottleneck, we introduce the non-rational infinite-horizon control framework: a unified approach for synthesizing scalable, stabilizing state-space controllers for a broad class of infinite-horizon control problems. We illustrate the mechanics of this framework through its application to the infinite-horizon extension of the distributionally robust controller. Beyond this case, we further showcase the versatility and effectiveness of this framework by deriving exact solutions to several canonical problems in robust control, including mixed H2/H∞ synthesis.
Bio:
Zihang Qiu is a fourth-year PhD candidate at the University of Amsterdam. His research interest lies in treatment plan optimization for proton radiation therapy with a focus on uncertainty handling and fast automated treatment plan generation.
Bio:
Taylan Kargin is a postdoctoral researcher at the Massachusetts Institute of Technology (MIT), with a joint affiliation at the Laboratory for Information and Decision Systems (LIDS) and the Computer Science and Artificial Intelligence Laboratory (CSAIL). Taylan obtained his PhD in Electrical Engineering at Caltech in 2025, under the supervision of Prof. Babak Hassibi. His research is broadly centered on the interplay between control theory, optimization, and machine learning, mainly focusing on the robust and adaptive control of dynamical systems under uncertainty.
15:00 (CET)
NUS Business School
Network Flow Models for Robust Binary Optimization with Selective Adaptability
Abstract:
Adaptive robust optimization problems have received significant attention in recent years, but remain notoriously difficult to solve when recourse decisions are discrete in nature. In this talk, we propose new reformulation techniques for adaptive robust binary optimization (ARBO) problems with objective uncertainty. Our main contribution revolves around a collection of exact and approximate network flow reformulations for the ARBO problem, which we develop by building upon ideas from the decision diagram literature. Our proposed models can generate feasible solutions, primal bounds and dual bounds, while their size and approximation quality can be precisely controlled through user-specified parameters. Furthermore, these models are easy to implement and can be solved directly with standard off-the-shelf solvers. We show that our models can generate high-quality solutions and dual bounds in significantly less time than popular benchmark methods.
Bio:
Ian Zhu is an Assistant Professor in the Department of Analytics and Operations at NUS Business School. His research interests broadly lie in the design of algorithms for solving various optimization problems. His research has been published in journals such as Operations Research, MSOM, and INFORMS Journal on Computing, and has been recognized with several paper awards, including the Pierskalla Best Paper Award, the Innovative Applications in Analytics Award, and the MSOM Student Paper Competition.
15:00 (CET)
University of Tehran
Quadratic Optimization Through the Lens of Adjustable Robust Optimization
Abstract:
Due to the uncertainty of component demand in new product launches, we propose an adaptive robust optimization method for managing spare parts inventory. We design two time-efficient algorithms to solve large-scale models. An ASML case study demonstrates the huge potential of our model to benefit maintenance service providers of expensive equipment.
Abstract:
Quadratic optimization (QO) has been studied extensively in the literature due to its applicability in many practical problems. While practical, it is known that QO problems are generally NP-hard. Consequently, researchers have developed many approximation methods to find good solutions. In this work, we analyze QO problems using robust optimization techniques. To this end, we first show that any QO problem can be reformulated as a disjoint bi-convex QO problem. Then, we provide an equivalent adjustable robust optimization (ARO) reformulation and leverage the methods available in the literature on ARO to approximate this reformulation. More specifically, we show that using a so-called decision rule technique to approximate the ARO reformulation is equivalent to using a reformulation-linearization technique on the original QO problem. Additionally, we design an algorithm that can find a close-to-optimal solution based on our new reformulations. Our numerical results demonstrate the efficiency of our algorithm in finding near-optimal solutions, particularly for large-sized instances, compared with off-the-shelf solvers and state-of-the-art approaches.
Bio:
Zhao Kang recently completed his Ph.D. at Eindhoven University of Technology and is an incoming senior research scientist at Huawei. His research focuses on robust optimization approaches for inventory management, with applications in collaboration with ASML. He received the 2024 Best Student Paper Award from the POMS Supply Chain Management College.
Bio:
Abbas Khademi is a Non-Resident Researcher at the School of Mathematics, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran. He received his PhD in Applied Mathematics from the University of Tehran in 2024. During his doctoral studies, he was a Visiting Researcher at Eindhoven University of Technology, Netherlands. His research primarily focuses on addressing nonlinearity and uncertainty in optimization problems.
15:00 (CET)
University of Chicago Booth School of Business
Distributionally Robust Linear Quadratic Control
Abstract:
Linear-Quadratic-Gaussian (LQG) control is a fundamental control paradigm that is studied in various fields such as engineering, computer science, economics, and neuroscience. It involves controlling a system with linear dynamics and imperfect observations, subject to additive noise, with the goal of minimizing a quadratic cost function for the state and control variables. In this work, we consider a generalization of the discrete-time, finite-horizon LQG problem, where the noise distributions are unknown and belong to Wasserstein ambiguity sets centered at nominal (Gaussian) distributions. The objective is to minimize a worst-case cost across all distributions in the ambiguity set, including non-Gaussian distributions. Despite the added complexity, we prove that a control policy that is linear in the observations is optimal for this problem, as in the classic LQG problem. We propose a numerical solution method that efficiently characterizes this optimal control policy. Our method uses the Frank-Wolfe algorithm to identify the least-favorable distributions within the Wasserstein ambiguity sets and computes the controller's optimal policy using Kalman filter estimation under these distributions.
Bio:
Bahar is an Assistant Professor of Operations Management at the University of Chicago Booth School of Business. In 2024, she completed her PhD in the Risk Analytics and Optimization Lab at École polytechnique fédérale de Lausanne (EPFL) in Switzerland. In 2018, she obtained her Bachelor of Science degree in Electrical and Electronics Engineering from the Middle East Technical University.
15:00 (CET)
University of Bergamo
Sampling and Bounding Methods for Multistage Robust and Distributionally Robust Optimization
Abstract:
This talk presents two approaches to addressing multistage optimization problems under uncertainty: multistage robust optimization (RO) and multistage distributionally robust optimization (DRO).
In the first part of the talk, we focus on multistage robust optimization problems of the minimax type, where the uncertainty set is modeled as a Cartesian product of compact stagewise sets [1]. To manage the inherent intractability of these problems, we propose an approximation strategy based on random sampling of the uncertainty space. This leads to two main questions: (1) how to quantify the error introduced by this sampling approach as a function of the sample size, and (2) whether the optimal value of the sampled problem converges to the true robust optimal value as the number of samples increases. We provide probabilistic guarantees on constraint violations and develop a sequence of lower bounds for the original robust problem. Numerical results on a multistage inventory management problem show that the method is effective for problems with a small number of stages, while highlighting computational limitations for larger-scale instances.
The second part of the talk focuses on multistage mixed-integer distributionally robust optimization. In this setting, uncertainty is represented through conditional ambiguity sets defined on scenario trees, constrained to remain close to nominal conditional distributions according to some measure of similarity/distance.
We propose a new bounding technique based on scenario grouping using the ambiguity sets associated
with various commonly used φ-divergences and the Wasserstein distance [2]. Our approach does not rely on specific structural assumptions such as linearity, convexity, or stagewise independence. The resulting bounds are applicable to a wide variety of DRO problems - multistage or two-stage, with or without integer variables, convex or nonconvex. Numerical results on a multistage production planning problem illustrate the effectiveness of the approach across different ambiguity sets, robustness levels, and partition strategies.
This talk aims to provide theoretical insights and practical methods for tackling uncertainty in multistage optimization, highlighting key aspects of both robust and distributionally robust formulations.
References:
[1] Maggioni, F., Dabbene, F., & Pflug, G. (2025). Multistage robust convex optimization problems: A sampling based approach. Annals of Operations Research. https://doi.org/10.1007/s10479-025-06545-4
[2] Bayraksan, G., Maggioni, F., Faccini, D., & Yang, A. (2024). Bounds for Multistage Mixed-Integer Distributionally Robust Optimization. SIAM Journal on Optimization, 34(1), 682-717.
Bio:
Francesca Maggioni is Professor of Operations Research at the Department of Management, Information and Production Engineering (DIGIP) of the University of Bergamo (Italy). Her research interests concern both methodological and applicative aspects of optimization under uncertainty. From a methodological point of view, she has developed different types of bounds and approximations for stochastic, robust and distributionally robust multistage optimization problems. She applies these methods to solve problems in logistics, transportation, energy and machine learning. On these topics she has published more than 80 scientific articles featured in peer-reviewed operations research journals. In 2021, she was awarded, as principal investigator, a grant from the Italian Ministry of Education for the project “ULTRA OPTYMAL Urban Logistics and sustainable TRAnsportation: OPtimization under uncertainTY and MAchine Learning”.
She currently coordinates the CQIIA-Matnet research group on the teaching of mathematics and its applications at the University of Bergamo and is Deputy Director of the PhD program in Applied Economics & Management of the Universities of Bergamo and Pavia.
She currently chairs the EURO Working Group on Stochastic Optimization and the AIRO Thematic Section of Stochastic Programming. She has been secretary and treasurer of the Stochastic Programming Society. She is Associate Editor of the journals Transportation Science, Computational Management Science (CMS), EURO Journal on Computational Optimization (EJCO), TOP, International Transactions in Operational Research (ITOR), Networks, RAIRO Operations Research and guest editor of several special issues in Operations Research and Applied Mathematics journals.
15:00 (CET)
Abstract:
The utilization of data is becoming paramount in addressing modern, increasingly complex, optimization problems. In this talk, I will present my vision of key aspects relating to data-driven robust optimization: (i) should data be used to first construct a description of uncertainty, or should we rather use data directly in the optimization domain? (ii) is it possible to endow data-driven designs with robustness certificates that hold without any a-priori knowledge? In the second part of the talk, I will introduce “Scenario Optimization”, a collection of data-driven optimization techniques that can provide algorithmic and theoretical answers to these questions.
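For readers unfamiliar with the scenario approach, the sketch below shows its basic mechanics on a toy portfolio problem: draw N samples of the uncertain parameter, impose the constraint for each sample, and solve the resulting convex program; the scenario approach theory then bounds the probability that a freshly drawn sample violates the constraint. The return model, sample size, and shortfall formulation are illustrative assumptions, not material from the talk.

```python
# Minimal sketch of the scenario approach: replace a chance constraint by its
# sampled counterparts and solve one convex program.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n_assets, N = 4, 200                                 # N = number of sampled scenarios
mu = np.array([0.05, 0.04, 0.06, 0.03])
returns = rng.normal(mu, 0.02, size=(N, n_assets))   # one row per sampled return scenario

x = cp.Variable(n_assets, nonneg=True)               # portfolio weights
t = cp.Variable()                                    # return level guaranteed on every sample
constraints = [cp.sum(x) == 1]
constraints += [returns[i] @ x >= t for i in range(N)]  # one constraint per scenario

cp.Problem(cp.Maximize(t), constraints).solve()
print("portfolio:", np.round(x.value, 3), " return guaranteed on samples:", round(t.value, 4))
```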
Bio:
Marco C. Campi is Professor of Control and Data-Driven Methods at the University of Brescia, Italy. He has held visiting and teaching appointments with Australian, USA, Japanese, Indian, Turkish and various European universities, besides the NASA Langley Research Center in Hampton, Virginia, USA. He has served as Chair of multiple IFAC Technical Committees and has served in diverse capacities on the Editorial Boards of top-tier control and optimization journals. In 2008, he received the IEEE CSS George S. Axelby award for the article “The Scenario Approach to Robust Control Design”. He has delivered plenary and semi-plenary addresses at major conferences including OPTIMIZATION, SYSID, and CDC, besides presenting various talks as an IEEE Distinguished Lecturer for the Control Systems Society. Marco C. Campi is a Fellow of IEEE and a Fellow of IFAC. His interests include: data-driven decision-making, stochastic optimization, inductive methods, and the fundamentals of probability theory.
15:00 (CET)
Abstract:
Counterfactual explanations (CEs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models. While CEs can be beneficial to affected individuals, recent work has exposed severe issues related to the robustness of state-of-the-art methods for obtaining CEs. Since a lack of robustness may compromise the validity of CEs, techniques to mitigate this risk are in order. In this talk we will begin by introducing the problem of (lack of) robustness, discuss its implications and present some recent solutions we developed to compute CEs with robustness guarantees.
Bio:
Francesco is a Research Fellow affiliated with the Centre for Explainable Artificial Intelligence at Imperial College London. His research focuses on safe and explainable AI, with special emphasis on counterfactual explanations and their robustness. Since 2022, he has led the project “ConTrust: Robust Contrastive Explanations for Deep Neural Networks”, a four-year effort devoted to the formal study of robustness issues arising in XAI. More details about Francesco and his research can be found at https://fraleo.github.io/.
15:00 (CET)
ESSEC Business School
Exact and Heuristic Methods for Gamma-Robust Mixed-Integer Linear Min-Max Problems
Abstract:
A standard type of uncertainty set in robust optimization is budgeted uncertainty, where an interval of possible values for each parameter is given and the total deviation from their lower bounds is bounded. In the two-stage setting, discrete and continuous budgeted uncertainty have to be distinguished. While two-stage robust selection problems with discrete budgeted uncertainty have been shown to be NP-hard, the complexity of the corresponding problems with continuous budgeted uncertainty has remained open. Only for an alternative version of continuous budgeted uncertainty, where the total absolute deviation of all parameters is bounded instead of the sum of all relative deviations, has the two-stage robust selection problem been shown to be solvable in polynomial time. We close a gap in the knowledge about the complexity of these problems by showing that the two-stage robust representative selection problem with the more common version of continuous budgeted uncertainty is solvable in polynomial time. After applying standard dualization techniques to reformulate the problem as a mixed-integer linear program, we present a combinatorial algorithm to solve the latter. Moreover, we provide new hardness results related to the two-stage robust selection problem with continuous budgeted uncertainty.
This is joint work with Marc Goerigk and Lasse Wulf.
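To make the dualization step mentioned in the abstract concrete, the following cvxpy sketch reformulates a single robust constraint under continuous budgeted uncertainty via LP duality (valid here because the decision vector is nonnegative). The data, the objective, and the single-constraint setting are illustrative assumptions, not instances from the talk.

```python
# Robust counterpart of  a'x + max_{delta in U} delta'x <= b  with
# U = { delta : delta_i = d_i z_i, 0 <= z <= 1, sum z <= Gamma } via LP duality.
import numpy as np
import cvxpy as cp

n = 5
a = np.array([3.0, 2.0, 4.0, 1.0, 2.0])    # nominal coefficients
d = np.array([1.0, 0.5, 1.5, 0.5, 1.0])    # maximal deviations
b, Gamma = 12.0, 2.0
c = -np.ones(n)                             # maximize sum(x) == minimize c'x

x = cp.Variable(n, nonneg=True)
pi = cp.Variable(nonneg=True)               # dual of the budget constraint sum z <= Gamma
rho = cp.Variable(n, nonneg=True)           # duals of z <= 1

robust_counterpart = [
    a @ x + Gamma * pi + cp.sum(rho) <= b,  # worst-case constraint after dualization
    rho + pi >= cp.multiply(d, x),          # dual feasibility: rho_i + pi >= d_i * x_i
]
cp.Problem(cp.Minimize(c @ x), robust_counterpart).solve()
print("robust solution:", np.round(x.value, 3))
```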
Abstract:
Due to their nested structure, bilevel problems are intrinsically hard to solve—even if all variables are continuous and all parameters of the problem are exactly known. Nevertheless, researchers are increasingly interested in more and more complicated instantiations of bilevel problems to model situations in real-world applications. Additional challenges in bilevel optimization arise if, e.g., (i) mixed-integer aspects are involved or (ii) problems under uncertainty are considered. In this talk, we study mixed-integer linear min-max problems with a binary lower level and a Gamma-robust treatment of lower-level objective uncertainty. We present an exact branch-and-cut approach to solve these Gamma-robust min-max problems, along with cuts tailored to the important class of monotone interdiction problems. Given the overall hardness of the considered problems, which are Σ_2^p-hard in general, we additionally present a heuristic approach. The latter relies on solving a linear number of deterministic min-max problems so that no problem-specific tailoring is required. The performance of our approaches is assessed in an extensive computational study on instances of the Gamma-robust knapsack interdiction problem. Our results show that the heuristic closes the optimality gap for a significant portion of the considered instances and that it often practically outperforms both heuristic and exact benchmark approaches. In particular, we report considerable speed-up factors when compared to our problem-tailored and exact branch-and-cut approach, while also solving more instances to global optimality.
Bio:
Dorothee Henke is a researcher at the Chair of Business Decisions and Data Science, led by Marc Goerigk, at the University of Passau. She studied Mathematics at the University of Bonn and worked at the Chair of Discrete Optimization, led by Christoph Buchheim, at TU Dortmund University. Her main research interests concern bilevel and robust optimization as well as the combination of both concepts, mostly from the perspective of combinatorial optimization and computational complexity theory.
Bio:
Yasmine Beck is a postdoctoral researcher at the Department of Information Systems, Data Analytics and Operations at ESSEC Business School (France) since November 2024. Prior to this, she worked as a research and teaching assistant at Trier University (Germany). Yasmine received her PhD from Trier University in December 2024 with a thesis entitled "Mixed-Integer Optimization Techniques for Robust Bilevel Problems with Here-and-Now Followers", supervised by Martin Schmidt. In 2024, she was also recognized as one of the YoungWomen4OR by the WISDOM forum of the EURO association. Her primary research interests include bilevel optimization, robust optimization, and mixed-integer optimization.
15:00 (CET)
University of Notre Dame
Nonlinear Decision Rules Made Scalable by Nonparametric Liftings
Abstract:
Sequential decision making often requires dynamic policies, which are computationally intractable in general. Decision rules provide approximate solutions by restricting decisions to simple functions of uncertainties. In this paper, we consider a nonparametric lifting framework where the uncertainty space is lifted to higher dimensions to obtain nonlinear decision rules. We propose two nonparametric liftings, which derive the nonlinear functions by leveraging the uncertainty set structure and problem coefficients. Both methods integrate the benefits from lifting and nonparametric approaches, and hence provide scalable decision rules with performance bounds. More specifically, the set-driven lifting is constructed by finding polyhedrons within uncertainty sets, inducing piecewise-linear decision rules with performance bounds. The dynamics-driven lifting, on the other hand, is constructed by extracting geometric information and accounting for problem coefficients. This is achieved by using linear decision rules of the original problem, also enabling one to quantify lower bounds of objective improvements over linear decision rules. Numerical comparisons with competing methods demonstrate superior computational scalability and comparable performance in objectives. These observations are magnified in multistage problems with extended time horizons, suggesting practical applicability of the proposed nonparametric liftings in large-scale dynamic robust optimization.
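The toy sketch below illustrates, under stated assumptions, how a lifted piecewise-linear decision rule is encoded: the uncertainty is one-dimensional, the lifting adds the feature max(ξ − τ, 0), and all constraints are imposed at the kink and the interval endpoints, which suffices because every expression is piecewise linear in ξ with its only kink at τ. On such a small instance the linear and lifted rules may attain the same worst-case value; the snippet shows the mechanics of the lifting, not a performance gap.

```python
# Linear vs. lifted piecewise-linear decision rule for the toy problem
#   min_{x <= cap} max_{xi in [lo,hi]}  c*x + q*y(xi)   s.t.  y(xi) >= xi - x,  y(xi) >= 0.
# Breakpoint tau, costs, cap, and interval are illustrative assumptions.
import cvxpy as cp

c, q, lo, hi, tau, cap = 1.0, 3.0, 0.0, 10.0, 5.0, 4.0
grid = [lo, tau, hi]   # kink and endpoints: enough evaluation points here

def worst_case_cost(lifted: bool) -> float:
    x = cp.Variable(nonneg=True)
    alpha, beta, gamma = cp.Variable(), cp.Variable(), cp.Variable()
    t = cp.Variable()                        # epigraph of the worst-case cost
    cons = [x <= cap]
    for xi in grid:
        lift = max(xi - tau, 0.0) if lifted else 0.0
        y = alpha + beta * xi + gamma * lift # decision rule evaluated at xi
        cons += [y >= xi - x, y >= 0, t >= c * x + q * y]
    prob = cp.Problem(cp.Minimize(t), cons)
    prob.solve()
    return prob.value

print("worst-case cost, linear rule:", round(worst_case_cost(False), 3))
print("worst-case cost, lifted rule:", round(worst_case_cost(True), 3))
```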
Bio:
Eojin Han is an assistant professor of IT, Analytics and Operations in the Mendoza College of Business at the University of Notre Dame. His research focuses on developing robust optimization methods to address operational challenges arising from uncertainties where accurate predictions are not available, with applications to healthcare, supply chains, and service systems. His work has been published in leading academic journals such as Operations Research and Management Science. Before joining Notre Dame, he served as an assistant professor at Southern Methodist University. Dr. Han holds a Ph.D. in Industrial Engineering and Management Sciences from Northwestern University and a B.S. in Mathematics and Electrical Engineering from Seoul National University.
15:00 (CET)
Technical University of Denmark
Completeness in the Polynomial Hierarchy for many natural Problems in Bilevel and Robust Optimization
Abstract:
In bilevel and robust optimization we are concerned with combinatorial min-max problems, for example from the areas of min-max regret robust optimization, network interdiction, most vital vertex problems, blocker problems, and two-stage adjustable robust optimization.
Even though these areas have been well-researched for over two decades and one would naturally expect many (if not most) of the problems occurring in these areas to be complete for the classes \Sigma_2 or \Sigma_3 from the polynomial hierarchy, almost no hardness results in this regime are currently known.
However, such complexity insights are important, since they imply that no polynomial-sized integer program for these min-max problems exists (unless the polynomial hierarchy collapses), and hence conventional IP-based approaches fail.
We address this lack of knowledge by introducing over 70 new \Sigma_2-complete and \Sigma_3-complete problems. In particular, we explore a meta-theorem, which reveals an underlying pattern behind the computational hardness of combinatorial min-max problems. Our meta-theorem applies to many classic combinatorial problems (such as TSP, clique, vertex cover, facility location, and knapsack) in conjunction with min-max regret, network interdiction, or two-stage adjustable robust optimization.
Bio:
Lasse Wulf is a postdoctoral researcher at the Technical University of Denmark (DTU). He obtained a double master's degree in mathematics and computer science from the Karlsruhe Institute of Technology (KIT) and a PhD from the Technical University of Graz. He is a recipient of the best student paper award at IWOCA 2021 and the best dissertation award from the Austrian Society of Operations Research (ÖGOR). His research in robust optimization is centered around the computational complexity of combinatorial min-max optimization, for example in min-max regret, network interdiction, bilevel, and adjustable/recoverable multi-stage robust optimization. In particular, he is interested in cases where such problems are even harder than NP-hard.
15:00 (CET)
Delft University of Technology
Inverse Optimization: The Role of Convexity in Learning
Abstract:
In this talk, we first briefly review a general class of data-driven decision-making and highlight the role of convexity in addressing three research questions that arise in this context. We then focus on a particular setting of such problems known as inverse optimization (IO), where the goal is to replicate the behavior of a decision-maker with an unknown objective function. We discuss recent developments in IO concerning convex training losses and optimization algorithms. The main message of this talk is that IO is a rich supervised learning model that can capture a complex (e.g., discontinuous) behavior while the training phase is still a convex program. We motivate the discussion with applications from control (learning the MPC control law), transportation (the 2021 Amazon Routing Problem Challenge), and robotics (comparison with the state-of-the-art in the MuJoCo environments).
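As a minimal illustration of the "convex training" message, the sketch below fits a linear objective to observed decisions over the unit simplex by minimizing the convex suboptimality loss. The feasible set, the synthetic data, and the normalization of θ are assumptions made for the example, not the speaker's formulation, and the learned objective need not be unique.

```python
# Inverse-optimization sketch: learn theta from observed decisions by minimizing
# the convex suboptimality loss  theta' x_obs - min_{x in simplex} theta' x,
# where min over the unit simplex reduces to the smallest entry of theta.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, m = 4, 25
theta_true = np.array([0.1, 0.5, 0.2, 0.2])
# Observed expert decisions: vertices of the simplex minimizing a noisy objective.
obs = np.zeros((m, n))
for i in range(m):
    obs[i, np.argmin(theta_true + rng.normal(0, 0.05, n))] = 1.0

theta = cp.Variable(n, nonneg=True)
loss = cp.sum(obs @ theta) / m - cp.min(theta)   # average suboptimality loss (convex)
cp.Problem(cp.Minimize(loss), [cp.sum(theta) == 1]).solve()

print("training loss:", round(loss.value, 4))
print("learned objective:", np.round(theta.value, 3))
```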
Bio:
Peyman Mohajerin Esfahani is an Associate Professor at the Delft Center for Systems and Control. He joined TU Delft in October 2016, and prior to that, he held several research appointments at EPFL, ETH Zurich, and MIT between 2014 and 2016. He received the BSc and MSc degrees from Sharif University of Technology, Iran, and the PhD degree from ETH Zurich. He currently serves as an associate editor of Mathematical Programming, Operations Research, Transactions on Automatic Control, and Open Journal of Mathematical Optimization. He was one of the three finalists for the Young Researcher Prize in Continuous Optimization awarded by the Mathematical Optimization Society in 2016, and a recipient of the 2016 George S. Axelby Outstanding Paper Award from the IEEE Control Systems Society. He received the ERC Starting Grant and the INFORMS Frederick W. Lanchester Prize in 2020. He is the recipient of the 2022 European Control Award.
15:00 (CET)
RWTH Aachen University
A Note on Piecewise Affine Decision Rules for Robust, Stochastic, and Data-Driven Optimization
Abstract:
Multi-stage decision-making under uncertainty, where decisions are taken under sequentially revealing uncertain problem parameters, is often essential to faithfully model managerial problems. Given the significant computational challenges involved, these problems are typically solved approximately. This short note introduces an algorithmic framework that revisits a popular approximation scheme for multi-stage stochastic programs by Georghiou et al. (2015) and improves upon it to deliver superior policies in the stochastic setting, as well as extending its applicability to robust optimization and a contemporary Wasserstein-based data-driven setting. We demonstrate how the policies of our framework can be computed efficiently, and we present numerical experiments that highlight the benefits of our method.
Abstract:
A Semi-Infinite Programming (SIP) problem is an optimization problem with a finite number of decision variables, and an infinite number of parametrized constraints. Standard SIP problems assume the parameter set is independent of the decision variables, whereas in generalized SIP problems, this set may depend on the decisions. Robust Optimization can be viewed as a special case of SIP (with decision-dependent uncertainty sets in the generalized setting). Despite this connection, the two fields have developed almost independently, with separate methods and applications. This talk bridges these perspectives by focusing on advancements in solving specific classes of SIP problems, which naturally apply to robust optimization scenarios.
We focus on convex standard SIP problems where the separation problem, i.e., the problem of finding the constraint that is the most violated by a given point, is not necessarily convex. Based on the Lagrangian dual of this problem, we derive a convex and tractable restriction of the considered SIP problem. We state sufficient conditions for the optimality of this restriction. If these conditions are not met, the restriction is enlarged through an Inner-Outer Approximation Algorithm, which converges to the value of the original SIP problem. This new algorithmic approach is compared with the classical Cutting Plane algorithm, for which a new rate of convergence, directly related to the iteration index, is proposed under specific assumptions.
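For contrast with the methods discussed in the abstract, here is a bare-bones cutting-plane loop for a semi-infinite (robust) constraint; for simplicity the separation step is solvable in closed form, unlike the nonconvex separation problems considered in the talk. The data, tolerance, and ball-shaped parameter set are illustrative assumptions.

```python
# Cutting-plane sketch for the semi-infinite constraint
#   (a + u)' x <= b   for all u with ||u||_2 <= rho,
# alternating a master over finitely many u's with a separation step whose
# solution is rho * x / ||x|| for the Euclidean ball.
import numpy as np
import cvxpy as cp

n, rho, tol = 4, 0.5, 1e-6
a = np.array([1.0, 2.0, 1.5, 1.0])
b = 5.0
c = -np.ones(n)                          # maximize sum(x)

U = [np.zeros(n)]                        # initial finite parameter set
for it in range(50):
    x = cp.Variable(n, nonneg=True)
    cons = [(a + u) @ x <= b for u in U]
    cp.Problem(cp.Minimize(c @ x), cons).solve()
    xv = x.value
    # Separation: most violated parameter over the ball (closed form here).
    u_star = rho * xv / max(np.linalg.norm(xv), 1e-12)
    if (a + u_star) @ xv - b <= tol:     # current x is robust feasible
        break
    U.append(u_star)

print(f"iterations: {it + 1}, robust solution: {np.round(xv, 3)}")
```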
Bio:
Simon Thomä is a Ph.D. student at the Professorship of Business Analytics and Intelligent Systems at TUM School of Management and the Chair of Operations Management at RWTH Aachen. His research focuses on the development of robust and adjustable optimization techniques with applications in network and supply chain design. He earned a Bachelor and Master of Science in Mathematics from the University of Bonn and a Master of Science in Business Analytics from Imperial College London.
Bio:
Martina Cerulli is a Researcher at the Department of Computer Science at the University of Salerno. She obtained her Master’s Degree in Industrial Engineering from La Sapienza University of Rome in July 2018, and, in December 2021, she defended her PhD (funded by the Marie Curie scholarship “MINOA”) at École Polytechnique de Paris, with a thesis on “Bilevel Optimization and Applications,” supervised by Prof. Claudia D’Ambrosio and Prof. Leo Liberti. After her PhD, Martina worked as a postdoctoral researcher in the ESSEC Business School, collaborating with Professors Ivana Ljubic and Claudia Archetti. In 2022 she was named one of the YoungWomen4OR by the WISDOM forum of the EURO association. Her primary research interests include bilevel optimization, mixed-integer programming, and optimization on graphs, with applications in areas such as aircraft conflict resolution, optimal power flow, social network analysis and last-mile delivery.
15:00 (CET)
Vrije Universiteit Amsterdam
Exact and Approximate Schemes for Robust Optimization Problems with Decision-Dependent Information Discovery
Abstract:
In uncertain optimization problems with decision-dependent information discovery, the decision-maker can influence when information is revealed, unlike the classic setting where uncertain parameters are revealed according to a prescribed filtration. This work examines two-stage robust optimization problems with decision-dependent information discovery, focusing on uncertainty in the objective function. We introduce the first exact algorithm for this class of problems and enhance the existing K-adaptability approximation by strengthening its formulation. Our approaches are tested on robust orienteering and shortest-path problems. We study the interplay between information and adaptability, demonstrating that the exact solution method often outperforms the K-adaptability approximation. Moreover, our experiments highlight the benefits of the strengthened K-adaptability formulation over the classical one, even in settings with decision-independent information discovery.
Bio:
Rosario Paradiso is an Assistant Professor in the Department of Operations Analytics at Vrije Universiteit Amsterdam in the Netherlands. He earned his PhD from the Department of Mathematics and Computer Science at the University of Calabria in Italy in 2020. Rosario's research primarily focuses on exact algorithms for combinatorial optimization problems, both deterministic and under uncertainty, with applications in supply chain management.
15:00 (CET)
Abstract:
Problem uncertainty typically limits how well decisions can be tailored to the problem at hand but often cannot be avoided. The availability of large quantities of data in modern applications, however, presents an exciting opportunity to make better-informed decisions. Distributionally robust optimization (DRO) has recently gained prominence as a paradigm for making data-driven decisions which are protected against adverse overfitting effects. We justify the effectiveness of this paradigm by pointing out that certain DRO formulations indeed are statistically efficient. Furthermore, DRO formulations can also be tailored to protect decisions against overfitting even when working with messy corrupted data. Finally, we discuss a hitherto hidden relation between DRO and classical robust statistics.
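One concrete instance of the connection between DRO and familiar statistical tools: for a linear classifier with hinge loss and perturbations of the features only, the worst-case expected loss over a type-1 Wasserstein ball (2-norm ground metric, radius ε) equals the empirical loss plus ε times the 2-norm of the weights. The sketch below encodes that reformulation directly; the synthetic data and the radius are assumptions made for the example.

```python
# DRO-as-regularization sketch: Wasserstein-robust hinge loss (feature
# perturbations only) = empirical hinge loss + eps * ||theta||_2.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
m, n, eps = 100, 5, 0.1
X = rng.normal(size=(m, n))
y = np.sign(X @ rng.normal(size=n) + 0.1 * rng.normal(size=m))

theta = cp.Variable(n)
hinge = cp.sum(cp.pos(1 - cp.multiply(y, X @ theta))) / m   # empirical hinge loss
dro_objective = hinge + eps * cp.norm(theta, 2)             # worst case over the Wasserstein ball
cp.Problem(cp.Minimize(dro_objective)).solve()
print("DRO (regularized) classifier:", np.round(theta.value, 3))
```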
Bio:
Bart obtained his PhD in control theory at ETH Zurich in 2016 under the supervision of Prof. Manfred Morari. He was an SNSF postdoctoral researcher at the Operations Research Center at MIT from 2018 to 2020 and Assistant Professor at the MIT Sloan School of Management from 2018 to 2024. As of 2024, Bart is a tenured senior researcher in the Stochastics Group at CWI Amsterdam. Bart's research lies at the interface between optimization and machine learning. He particularly enjoys developing novel mathematical methodologies and algorithms with which we can turn data into better decisions. Although most of his research is methodological, he does enjoy applications related to problems in renewable energy.
15:00 (CET)
University of Edinburgh
Two-stage and Lagrangian Dual Decision Rules for Multistage Adaptive Robust Optimization
Abstract:
We design primal and dual bounding methods for general multistage adaptive robust optimization (MSARO) problems. From the primal perspective, this is achieved by applying decision rules that restrict the functional forms of only a certain subset of decision variables resulting in an approximation of MSARO as a two-stage adjustable robust optimization (2ARO) problem. We leverage the 2ARO literature in the solution of this approximation. From the dual perspective, decision rules are applied to the Lagrangian multipliers of a Lagrangian dual of MSARO, resulting in a two-stage stochastic optimization problem. As the quality of the resulting dual bound depends on the distribution chosen when developing the dual formulation, we define a distribution optimization problem with the aim of optimizing the obtained bound, and we develop solution methods tailored to the nature of the recourse variables. Computational experiments illustrate the potential value of our bounding techniques compared to the existing methods.
(This is a joint work with Maryam Daryalal and Ayse N. Arslan.)
Bio:
Merve Bodur is an Associate Professor in the School of Mathematics at the University of Edinburgh. She obtained her Ph.D. from the University of Wisconsin-Madison, her B.S. in Industrial Engineering and her B.A. in Mathematics from Bogazici University, Turkey. Her main research area is optimization under uncertainty, primarily for discrete optimization problems, with applications in a variety of areas such as scheduling, transportation, healthcare, telecommunications, and power systems. She serves on the editorial boards of Operations Research Letters, Omega, and INFOR. She is currently the Vice Chair/Chair-Elect of the INFORMS Computing Society, serves on the Committee on Stochastic Programming, and is a former Vice Chair of the INFORMS Optimization Society.
15:00 (CET)
Abstract:
In the shortest path problem, we are given a directed graph G with nonnegative arc costs, and we seek an s-t path in G of the smallest total cost. This fundamental combinatorial problem can be solved efficiently in general digraphs. In some applications, a path computed in the first stage can be modified to some extent after the costs in G have been changed. Therefore, knowing the new cost scenario, we compute another (second-stage) path in some prescribed neighborhood of the first-stage path, so that the total cost of both paths is minimal. Typically, the second-stage arc costs are uncertain while the first-stage path is computed, and we seek a solution that minimizes the total cost under the worst second-stage cost realization. We use interval uncertainty sets to model the uncertain second-stage arc costs, which is very common in robust optimization. The recoverable robust shortest path problem under the interval uncertainty has been extensively studied since the work of Busing (2011, 2012). In this talk, some new results in this area are presented. We will characterize the complexity of the problem, depending on the class of the input digraph (general, acyclic, arc series-parallel), type of the neighborhood (arc inclusion, arc exclusion, arc symmetric difference) and structure of the interval uncertainty set (discrete budgeted, continuous budgeted). We will also state some open problems in this area.
Bio:
Adam Kasperski is a Professor at the Wroclaw University of Science and Technology. His scientific interests focus on robust discrete optimization problems. His work aims to better understand the computational complexity of different approaches in this area. He is currently the head of a research project entitled "Decision making under risk and uncertainty - concepts, methods, and applications" at the Wroclaw University of Science and Technology.
15:00 (CET)
Delft University of Technology
Robust Optimal Classification Trees
Friedrich-Alexander-Universität Erlangen-Nürnberg
Towards Robust Interpretable Surrogates for Optimization
Abstract:
Decision trees are a popular choice of interpretable model, but just like neural networks, they suffer from adversarial examples: adversarial perturbations that result in misclassifications. Existing algorithms for fitting decision trees that are robust against adversarial examples are greedy heuristics and therefore lack approximation guarantees. In this work, we propose Robust Optimal Classification Trees (ROCT), a collection of methods to train decision trees that are optimally robust against user-specified attack models. We show that the min-max optimization problem that arises in adversarial learning can be solved using a single minimization formulation for decision trees with 0-1 loss. We propose such formulations in Mixed-Integer Linear Programming and Maximum Satisfiability, which widely available solvers can optimize. We also present a method that determines the upper bound on adversarial accuracy for any model using bipartite matching.
Abstract:
An important factor in the practical implementation of optimization models is the acceptance by the intended users. This is influenced, among other factors, by the interpretability of the solution process. At the same time, uncertainty in the parameters of optimization problems can complicate decision-making, a challenge often addressed through robust optimization techniques. Our work aims to integrate interpretability and robustness by developing optimization rules, in the form of decision trees, that are both resilient to parameter perturbations and inherently interpretable. We present a model and corresponding solution methods for this purpose, along with an evaluation of heuristic approaches.
Bio:
Daniël Vos is a postdoctoral researcher in the Algorithmics group at TU Delft. His work aims at making machine learning models more trustworthy by combining techniques from machine learning, robust optimization, and cyber security to improve interpretability, robustness, and privacy.
Bio:
Michael Hartisch currently serves as a temporary professor for Analytics & Mixed-Integer Optimization at Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany. Prior to this role, he was a temporary professor for Network and Data Science Management at the University of Siegen, Germany. His academic journey began with studies in mathematics at Friedrich-Schiller-University Jena, Germany, and he received his PhD at the University of Siegen. Michael’s research primarily focuses on optimization under uncertainty, as well as explainability and interpretability in optimization.
15:00 (CET)
George Mason University
New Prescriptive Analytics and Data-Driven Optimization Paradigms
Abstract:
We present prescriptive analytics paradigms for data-driven decision-making for complex systems and environments. Conventional data-driven decision-making methods assume access to plentiful, well-structured data to generate a prediction model, and subsequently, an optimization model is used to create a decision. However, real data can be incomplete (sparse, missing), scarce, partially observable, and time-varying. We present new learning and optimization frameworks to tackle emerging challenges caused by the complexity and limitations of data and difficulties inherent in data-system integration. Our approaches are based on amalgams of machine learning, distributionally robust/stochastic optimization, and combinatorial optimization.
Bio:
Hoda Bidkhori is an Assistant Professor at George Mason University, where her research focuses on the theory and applications of data analytics and data-driven optimization, with applications in logistics, supply chain management, and healthcare. Previously, she served as an Assistant Professor at the University of Pittsburgh. Prior to that, she was a postdoctoral researcher in Operations Research at MIT, where she also earned her Ph.D. in Applied Mathematics.
15:00 (CET)
Abstract:
We study decision problems under uncertainty, where the decision-maker has access to K data sources that carry biased information about the underlying risk factors. The biases are measured by the mismatch between the risk factor distribution and the K data-generating distributions with respect to an optimal transport (OT) distance. In this situation the decision-maker can exploit the information contained in the biased samples by solving a distributionally robust optimization (DRO) problem, where the ambiguity set is defined as the intersection of K OT neighborhoods, each of which is centered at the empirical distribution on the samples generated by a biased data source. We show that if the decision-maker has a prior belief about the biases, then the out-of-sample performance of the DRO solution can improve with K—irrespective of the magnitude of the biases. We also show that, under standard convexity assumptions, the proposed DRO problem is computationally tractable if either K or the dimension of the risk factors is kept constant.
Bio:
Daniel Kuhn is Professor of Operations Research at the College of Management of Technology at EPFL, where he holds the Chair of Risk Analytics and Optimization (RAO). His current research interests are focused on data-driven optimization, the development of efficient computational methods for the solution of stochastic and robust optimization problems and the design of approximation schemes that ensure their computational tractability. His work is primarily application-driven, the main application areas being engineered systems, machine learning, business analytics and finance. Before joining EPFL, Daniel Kuhn was a faculty member in the Department of Computing at Imperial College London (2007-2013) and a postdoctoral research associate in the Department of Management Science and Engineering at Stanford University (2005-2006). He is the editor-in-chief of Mathematical Programming.
15:00 (CET)
Technion Israel Institute of Technology
Smooth Uncertainty Sets: Connecting Uncertain Parameters via a Simple Polyhedral Set
Abstract:
In this work we propose a novel polyhedral uncertainty set for robust optimization: the smooth uncertainty set. This set imposes bounds on the difference of pairs of uncertain parameters. Smooth uncertainty sets arise naturally in applications where there is a strong connection between the generation of these parameters due to vicinity in space/time, or due to physical constraints. Thus, for such applications, smooth uncertainty sets can be constructed based on expert estimation or existing data.
Under probabilistic assumptions on the uncertain parameters' correlations, we derive a construction of the uncertainty set that provides probabilistic guarantees for the resulting solution of the robust problem. We show that the size of this set decreases as the correlation between the parameters increases.
In terms of tractability, for specially structured problems, we prove the robust counterpart can be reformulated in a compact way, with no additional variables and few additional constraints. For a more general problem structure, we construct a column generation algorithm tailored for the smooth uncertainty set for solving the robust problem.
Joint work with Michael Poss (CNRS, LIRMM) and Noam Goldberg (Bar-Ilan University).
Bio:
Shimrit Shtern is a senior lecturer in the Faculty of Data and Decision Sciences at the Technion - Israel Institute of Technology. She received a Ph.D. in Optimization and an M.Sc. in Operations Research, both from the Technion, and was a post-doctoral fellow at the Operations Research Center at MIT. Her research focuses on theory and applications of optimization under uncertainty as well as the development and convergence analysis of first-order methods for structured optimization problems. She is a recipient of research grants from both the Israel Science Foundation (ISF) and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation).
15:00 (CET)
Abstract:
Multi-stage mixed-integer robust optimization, in which decisions are taken sequentially as new information becomes available about the uncertain problem parameters, is a very versatile yet computationally challenging paradigm for decision making under uncertainty. In this talk, I will propose solution methods for two-stage and multi-stage versions of such problems. For the first, we will construct piecewise constant decision rules by adaptively partitioning the uncertainty set. The partitions of this set are iteratively updated by separating so-called critical scenarios. We discuss how to identify such critical scenarios based on the branch-and-bound procedure for solving the current corresponding static problem. For multi-stage robust mixed-integer programs, we will consider a finite adaptability scheme, which allows us to decompose the multi-stage problem into a large number of much simpler two-stage problems. We discuss how these two-stage problems can be solved both exactly and approximately, and we report numerical results for route planning and location-transportation problems.
Bio:
Ward Romeijnders is a full professor within the Department of Operations at the University of Groningen. His research is focused on developing exact and approximate solution methods for integer optimization problems under uncertainty. This class of problems can be used to support decision making under uncertainty for a wide range of applications in, e.g., energy, healthcare, logistics, and finance. Ward publishes his research in journals such as Operations Research, Mathematical Programming, SIAM Journal on Optimization, European Journal of Operational Research, and INFORMS Journal on Computing, and he is Associate Editor of Mathematical Methods of Operations Research. He has received several research grants from the Netherlands Organisation for Scientific Research, the latest one for a project called "Discrete Decision Making under Uncertainty".
15:00 (CET)
Abstract:
We present two machine learning methods to learn uncertainty sets in decision-making problems affected by uncertain data. In the first part, we introduce mean robust optimization (MRO), a framework that constructs data-driven uncertainty sets based on clustered data rather than on observed data points directly. By varying the number of clusters, MRO balances computational complexity and conservatism, and effectively bridges robust and Wasserstein distributionally robust optimization. In the second part, we present LRO, a method to automatically learn the shape and size of the uncertainty sets in robust optimization. LRO optimizes the performance across a family of parametric problems, while guaranteeing a target probability of constraint satisfaction. It relies on a stochastic augmented Lagrangian method that differentiates the solutions of robust optimization problems with respect to the parameters of the uncertainty set. Our approach is very flexible, and it can learn a wide variety of uncertainty sets commonly used in practice. We illustrate the benefits of our two machine learning methods on several numerical examples, obtaining significant reductions in computational complexity and conservatism, while preserving out-of-sample constraint satisfaction guarantees.
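A stripped-down flavor of the clustering idea in the first part, under illustrative assumptions (one-dimensional newsvendor demand, k-means clustering, a fixed ball radius): cluster the data, then minimize the cluster-weighted worst case over a ball around each centroid. This is a sketch of the general principle, not the speaker's implementation.

```python
# Clustered data-driven robust optimization: weighted worst case over balls
# around cluster centroids, shown for a toy 1-D newsvendor.
import numpy as np
import cvxpy as cp
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
demand = rng.gamma(shape=5.0, scale=10.0, size=500).reshape(-1, 1)
K, rho = 5, 2.0                               # number of clusters, ball radius
p, h = 4.0, 1.0                               # backorder and holding costs

km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(demand)
centers = km.cluster_centers_.ravel()
weights = np.bincount(km.labels_, minlength=K) / len(demand)

x = cp.Variable(nonneg=True)                  # order quantity
# Worst case of max(p*(u - x), h*(x - u)) over |u - center_k| <= rho, in closed form.
worst_k = [cp.maximum(p * (centers[k] + rho - x), h * (x - (centers[k] - rho))) for k in range(K)]
objective = cp.sum(cp.multiply(weights, cp.hstack(worst_k)))
cp.Problem(cp.Minimize(objective)).solve()
print("clustered-robust order quantity:", round(float(x.value), 2))
```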
Bio:
Bartolomeo Stellato is an Assistant Professor in the Department of Operations Research and Financial Engineering at Princeton University. Previously, he was a Postdoctoral Associate at the MIT Sloan School of Management and Operations Research Center. He received a DPhil (PhD) in Engineering Science from the University of Oxford, a MSc in Robotics, Systems and Control from ETH Zurich, and a BSc in Automation Engineering from Politecnico di Milano. He is the developer of OSQP, a widely used solver in mathematical optimization. Bartolomeo Stellato's awards include the NSF CAREER Award, the Princeton SEAS Howard B. Wentz Jr. Faculty Award, the Franco Strazzabosco Young Investigator Award from ISSNAF, the Princeton SEAS Innovation Award in Data Science, the Best Paper Award in Mathematical Programming Computation, and the First Place Prize Paper Award in IEEE Transactions on Power Electronics. His research focuses on data-driven computational tools for mathematical optimization, machine learning, and optimal control.
15:00 (CET)
Trier University
Adjustable Robust Nonlinear Network Design under Demand Uncertainties
Abstract:
We study network design problems for nonlinear and nonconvex flow models under demand uncertainties. To this end, we apply the concept of adjustable robust optimization to compute a network design that admits a feasible transport for all, possibly infinitely many, demand scenarios within a given uncertainty set. For solving the corresponding adjustable robust mixed-integer nonlinear optimization problem, we show that a given network design is robust feasible, i.e., it admits a feasible transport for all demand uncertainties, if and only if a finite number of worst-case demand scenarios can be routed through the network. We compute these worst-case scenarios by solving polynomially many nonlinear optimization problems. Embedding this result for robust feasibility in an adversarial approach leads to an exact algorithm that computes an optimal robust network design in a finite number of iterations. Since all of the results are valid for general potential-based flows, the approach can be applied to different utility networks such as gas, hydrogen, or water networks. We finally demonstrate the applicability of the method by computing robust gas networks that are protected from future demand fluctuations.
Bio:
Johannes Thürauf has been a postdoctoral researcher at Trier University since November 2021. He studied Mathematics at the Friedrich-Alexander Universität (FAU) Erlangen-Nürnberg and received his M.Sc. degree in 2017. Afterward, he worked as a PhD student at the FAU Erlangen-Nürnberg and received his PhD at the end of 2021. His main research interests include robust and bilevel optimization (under uncertainty), for example with applications to networks. In particular, he is interested in the interplay of robust and bilevel optimization.
15:00 (CET)
Abstract:
We address a continuous-time continuous-space stochastic optimal control problem where the controller lacks exact knowledge of the underlying diffusion process. Instead, the controller relies on a finite set of historical disturbance trajectories. To tackle this challenge, we propose a novel approach called Distributionally Robust Path Integral control (DRPI). This method utilizes distributionally robust optimization (DRO) to make the resulting policy robust against the unknown diffusion process. Notably, the DRPI scheme bears resemblance to risk-sensitive control, allowing us to leverage the path integral control (PIC) framework as an efficient solution method. We establish theoretical performance guarantees for the DRPI scheme, which facilitates the determination of an appropriate risk parameter for the risk-sensitive control. Through extensive validation, we demonstrate the effectiveness of our approach and its superiority over risk-neutral and risk-averse PIC policies in scenarios where the true diffusion process is unknown.
Bio:
Grani A. Hanasusanto is an Associate Professor of Industrial & Enterprise Systems Engineering at the University of Illinois Urbana-Champaign (UIUC). Before joining UIUC, he was an Assistant Professor at The University of Texas at Austin and a Postdoctoral Scholar at the College of Management of Technology at École Polytechnique Fédérale de Lausanne. He holds a Ph.D. in Operations Research from Imperial College London and an MSc in Financial Engineering from the National University of Singapore. His research focuses on developing tractable solution schemes for decision-making problems under uncertainty, with applications spanning operations management, energy systems, finance, data analytics, and machine learning. He is a recipient of the NSF CAREER award and the INFORMS Diversity, Equity, and Inclusion (DEI) Ambassadors Program award. He currently serves as a committee member of the INFORMS Diversity Community and an Associate Editor for Operations Research.
15:00 (CET)
Université de Bordeaux
Uncertainty reduction in (static) robust optimization
Abstract:
In this talk, we focus on static robust optimization problems where the uncertainty set can be controlled through the actions of the decision maker known as decision-dependent uncertainty. Particularly, we consider an uncertainty reduction paradigm where the decision-maker can reduce upper bounds of uncertain parameters as a result of some proactive actions. This paradigm was recently proposed in the literature and the resulting problems were shown to be NP-Hard. In this talk, we pay particular attention to the special case of robust combinatorial optimization problems, and show that they are also NP-Hard under the uncertainty reduction paradigm. Despite this discouraging result, we show that under some additional assumptions polynomial-time algorithms can be devised to solve robust combinatorial optimization problems with uncertainty reduction. We additionally provide insights into possible MILP reformulations in the general case and illustrate the practical relevance of our results on the shortest path instances from the literature.
Bio:
Ayse N. Arslan is a junior researcher at Inria Center of Bordeaux. Her research interests lie at the intersection of mixed-integer programming and optimization under uncertainty with a particular focus on stochastic and robust optimization and their applications. Some of her recent work includes decomposition algorithms for solving two-stage robust optimization problems and primal and dual decision rules for approximating multi-stage robust optimization problems.
15:00 (CET)
Lehigh University
An inexact column-and-constraint generation method to solve two-stage robust optimization problems
Abstract:
In this talk, we will present our new inexact column-and-constraint generation (i-C&CG) method for solving challenging two-stage robust and distributionally robust optimization problems. In our i-C&CG method, the master problems only need to be solved to a prescribed relative optimality gap or time limit, which may be the only tractable option when solving large-scale and/or challenging problems in practice. We equip i-C&CG with a backtracking routine that controls the trade-off between bound improvement and inexactness. Importantly, this routine allows us to derive theoretical finite convergence guarantees for our i-C&CG method, demonstrating that it converges to the exact optimal solution under some parameter settings. Numerical experiments on a scheduling problem and a facility location problem demonstrate the computational advantages of our i-C&CG method over a state-of-the-art C&CG method.
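A schematic sketch of such an inexact C&CG loop may help fix ideas. The solver callbacks (solve_master, solve_subproblem, first_stage_cost) are hypothetical placeholders to be supplied by the modeler; this is an illustration of the general idea, not the authors' implementation.

```python
# Schematic sketch of an inexact column-and-constraint generation (i-C&CG) loop.
# The callbacks are hypothetical placeholders, not code from the talk.

def inexact_ccg(solve_master, solve_subproblem, first_stage_cost,
                initial_scenarios, eps=1e-4, master_gap=0.05, shrink=0.5,
                max_iters=100):
    """Returns (incumbent, lower_bound, upper_bound)."""
    scenarios = list(initial_scenarios)
    lb, ub, incumbent = float("-inf"), float("inf"), None
    for _ in range(max_iters):
        # Master problem is solved only to a prescribed relative gap
        # (the "inexact" part); its best bound is still a valid lower bound,
        # since the master only contains a subset of scenarios.
        x, master_bound = solve_master(scenarios, rel_gap=master_gap)
        lb = max(lb, master_bound)

        # Worst-case scenario and recourse cost for the candidate first stage.
        worst_scenario, recourse_cost = solve_subproblem(x)
        total = first_stage_cost(x) + recourse_cost
        if total < ub:
            ub, incumbent = total, x

        if ub - lb <= eps:
            break
        scenarios.append(worst_scenario)   # generate new columns/constraints
        # Backtracking flavor: tighten the master gap before the next
        # iteration so that the bounds keep improving.
        master_gap *= shrink
    return incumbent, lb, ub
```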
Dr. Karmel S. Shehadeh is an Assistant Professor in the Department of Industrial and Systems Engineering at Lehigh University. Before joining Lehigh, she was a presidential and dean postdoctoral fellow at Heinz College of Information Systems and Public Policy at Carnegie Mellon University. She holds a doctoral degree in Industrial and Operations Engineering from the University of Michigan, a master's degree in Systems Science and Industrial Engineering from Binghamton University, and a bachelor's in Biomedical Engineering from Jordan University of Science and Technology. Her research interests revolve around stochastic and robust optimization with applications to healthcare operations and analytics, facility location, transportation systems, and fair decision-making. Professional recognition of her work includes an INFORMS Minority Issues Forum Paper Award (winner), Junior Faculty Interest Group Paper Prize (finalist), Service Science Best Cluster Paper Award (finalist), and Lehigh's Alfred Noble Robinson Faculty Award.
15:00 (CET)
Abstract:
The abundance of data has led to the emergence of a variety of optimization techniques that attempt to leverage available side information to provide more anticipative decisions. The wide range of methods and contexts of application have motivated the design of a universal unitless measure of performance known as the coefficient of prescriptiveness. This coefficient was designed to quantify both the quality of contextual decisions compared to a reference one and the prescriptive power of side information. To identify policies that maximize the former in a data-driven context, this talk introduces a distributionally robust contextual optimization model where the coefficient of prescriptiveness substitutes for the classical empirical risk minimization objective. We present a bisection algorithm to solve this model, which relies on solving a series of linear programs when the distributional ambiguity set has an appropriate nested form and polyhedral structure. Studying a contextual shortest path problem, we evaluate the robustness of the resulting policies against alternative methods when the out-of-sample dataset is subject to varying amounts of distribution shift.
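For reference, the coefficient of prescriptiveness, as commonly written following Bertsimas and Kallus (notation ours), compares a contextual policy against the best non-contextual reference decision and the full-information benchmark:

```latex
% Coefficient of prescriptiveness (one common form; notation ours):
% \hat{R}(\pi)        : out-of-sample cost of the contextual policy \pi,
% \hat{R}_{\mathrm{ref}} : cost of the non-contextual reference decision (e.g., SAA),
% \hat{R}^{\star}     : cost achievable with full knowledge of the realized uncertainty.
P \;=\; 1 \;-\; \frac{\hat{R}(\pi) \;-\; \hat{R}^{\star}}{\hat{R}_{\mathrm{ref}} \;-\; \hat{R}^{\star}},
\qquad P = 0 \text{ for the reference decision}, \quad P = 1 \text{ at the full-information benchmark}.
```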
Abstract:
The field of Contextual Optimization (CO) integrates machine learning and optimization to solve decision-making problems under uncertainty. Recently, a risk-sensitive variant of CO, known as Conditional Robust Optimization (CRO), has combined uncertainty quantification with robust optimization to promote safety and reliability in high-stakes applications. Exploiting modern differentiable optimization methods, we propose a novel end-to-end approach to train a CRO model in a way that accounts for both the empirical risk of the prescribed decisions and the quality of conditional coverage of the contextual uncertainty set that supports them. While guarantees of success for the latter objective are impossible to obtain from the point of view of conformal prediction theory, high-quality conditional coverage is achieved empirically by ingeniously employing a logistic regression differentiable layer within the calculation of coverage quality in our training loss. We show that the proposed training algorithms produce decisions that outperform traditional estimate-then-optimize approaches.
Bio:
Mehran Poursoltani is a Postdoctoral Fellow at the Desautels Faculty of Management and the Bensadoun School of Retail Management at McGill University. He earned his PhD in Management Science from HEC Montreal, where he was supervised by Erick Delage and Angelos Georghiou. His research interests include robust optimization, regret minimization, contextual optimization, decision analysis, and risk management.
Bio:
Abhilash Chenreddy is a PhD candidate at the Department of Decision Sciences, HEC Montreal supervised by Prof. Erick Delage. His research interests are at the intersection of machine learning and Robust Optimization, with a focus on designing uncertainty sets adaptable to contextual information.
15:00 (CET)
RWTH Aachen University
Solving Robust Combinatorial Optimization Problems under Budget Uncertainty faster by Recycling Valid Inequalities or Tree Search
Abstract:
In combinatorial optimization problems under budget uncertainty, the solution space is deterministic, but the objective is uncertain. Following the seminal work of Bertsimas & Sim (2003, 2004), such problems can be solved in various ways: by separating constraints bounding the objective, by a compact reformulation, or by solving n+1 deterministic problems of the same type. Although the complexity of the problem does not increase in theory, in many cases the practical performance deteriorates significantly.
In this talk, we show two ways to improve the computation times of the compact reformulation: (i) recycling valid inequalities known for the deterministic problem, which yields surprisingly strong valid inequalities for the robust problem, and (ii) a tailored branch-and-bound algorithm that performs a tree search on one of the dual variables. This is joint work with Timo Gersing and Christina Büsing.
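For orientation, a sketch of the compact reformulation referred to above, for a combinatorial problem with uncertain costs under the budgeted uncertainty set (nominal costs c, deviations ĉ, budget Γ, binary x; generic notation): the techniques in (i) and (ii) strengthen and accelerate exactly this kind of formulation.

```latex
% Budgeted (Bertsimas-Sim) objective uncertainty: at most \Gamma coefficients
% deviate upward by \hat{c}_i. Dualizing the inner maximization gives the
% compact robust counterpart (sketch; x binary, \Gamma integer).
\min_{x \in X} \Bigl( c^\top x + \max_{S \subseteq \{1,\dots,n\},\, |S| \le \Gamma} \sum_{i \in S} \hat{c}_i x_i \Bigr)
\;=\;
\min_{x \in X,\; \theta \ge 0,\; \pi \ge 0} \; c^\top x + \Gamma\,\theta + \sum_{i=1}^{n} \pi_i
\quad \text{s.t.} \quad \theta + \pi_i \ge \hat{c}_i\, x_i \quad \forall i .
```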
Arie M.C.A. Koster is since 2009 a Professor of Mathematics at RWTH Aachen University. Before, he received his PhD from Maastricht University and worked at Zuse Institute Berlin and University of Warwick. His research interests include integer linear programming, polyhedral combinatorics, robust optimization, algorithmic graph theory, and network optimization. He has published over 100 research papers and is on the editorial board of Annals of Operations Research, INFORMS Journal on Computing, and Networks.
15:00 (CET)
Eindhoven University of Technology
Title: Optimal robust inventory management with volume flexibility: Matching capacity and demand with the lookahead peak-shaving policy
Abstract:
We study inventory control with volume flexibility: A firm can replenish using period-dependent base capacity at regular sourcing costs and access additional supply at a premium. The optimal replenishment policy is characterized by two period-dependent base-stock levels but determining their values is not trivial, especially for nonstationary and correlated demand. We propose the Lookahead Peak-Shaving policy that anticipates and peak shaves orders from future peak-demand periods to the current period, thereby matching capacity and demand. Peak shaving anticipates future order peaks and partially shifts them forward. This contrasts with conventional smoothing, which recovers the inventory deficit resulting from demand peaks by increasing later orders. Our contribution is threefold. First, we use a novel iterative approach to prove the robust optimality of the Lookahead Peak-Shaving policy. Second, we provide explicit expressions of the period-dependent base-stock levels and analyze the amount of peak shaving. Finally, we demonstrate how our policy outperforms other heuristics in stochastic systems. Most cost savings occur when demand is nonstationary and negatively correlated, and base capacities fluctuate around the mean demand. Our insights apply to several practical settings, including production systems with overtime, sourcing from multiple capacitated suppliers, or transportation planning with a spot market. Applying our model to data from a manufacturer reduces inventory and sourcing costs by 6.7%, compared to the manufacturer's policy without peak shaving.
This is a joint work with Joren Gijsbrechts, Robert N. Boute, Jan A. Van Mieghem.
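As a toy illustration of the peak-shaving idea only (not the Lookahead Peak-Shaving policy itself, whose base-stock levels the paper derives), the sketch below pulls the part of future orders that exceeds base capacity forward into earlier periods that still have slack:

```python
# Toy illustration of peak shaving (not the paper's policy): orders that would
# exceed base capacity in future periods are pulled forward into earlier
# periods with remaining slack, instead of paying for premium capacity later.

def peak_shave(orders, base_capacity):
    """orders[t]: planned order in period t; base_capacity[t]: regular capacity."""
    orders = list(orders)
    T = len(orders)
    for t in range(T):
        slack = base_capacity[t] - orders[t]
        if slack <= 0:
            continue
        for s in range(t + 1, T):                 # look ahead for demand peaks
            excess = orders[s] - base_capacity[s]
            if excess > 0:
                moved = min(slack, excess)        # shift part of the peak forward
                orders[t] += moved
                orders[s] -= moved
                slack -= moved
                if slack == 0:
                    break
    return orders

# Example: the peak in period 2 is partially produced ahead in period 0.
print(peak_shave([4, 5, 12], base_capacity=[8, 8, 8]))   # -> [8, 5, 8]
```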
Christina Imdahl is an Assistant Professor of Machine Learning in Operations Management in the OPAC group at Eindhoven University of Technology (TU/e). Christina's research focuses on decision-making in operations management. She develops models that are motivated by operational practice and explores how we can leverage data to make better decisions. In her research, she combines stochastic and robust optimization with empirical analyses to gain insights into performance.
15:00 (CET)
University of Illinois at Chicago
Title: Improving the Security of United States Elections with Robust Optimization
Abstract:
For more than a century, election officials across the United States have inspected voting machines before elections using a procedure called Logic and Accuracy Testing (LAT). This procedure consists of election officials casting a test deck of ballots into each voting machine and confirming the machine produces the expected vote total for each candidate.
In this talk, I will bring a scientific perspective to LAT by introducing the first formal approach to designing test decks with rigorous security guarantees. Specifically, we propose using robust optimization to find test decks that are guaranteed to detect any voting machine misconfiguration that would cause votes to be swapped across candidates. Out of all the test decks with this security guarantee, the robust optimization problem yields the test deck with the minimum number of ballots, thereby minimizing implementation costs for election officials. To facilitate deployment at scale, we developed a practical exact algorithm for solving our robust optimization problems based on mixed-integer optimization and the cutting plane method.
In partnership with the Michigan Bureau of Elections, we retrospectively applied our robust optimization approach to all 6928 ballot styles from Michigan's November 2022 general election; this retrospective study reveals that the test decks with rigorous security guarantees obtained by our approach require, on average, only 1.2% more ballots than current practice. Our robust optimization approach has since been piloted in real-world elections by the Michigan Bureau of Elections as a low-cost way to improve election security and increase public trust in democratic institutions.
This is joint work with Alex Halderman (Michigan) and Braden Crimmins (Stanford and Michigan), and our preprint is available at https://arxiv.org/pdf/2308.02306.pdf.
Brad Sturt is an assistant professor of business analytics at the University of Illinois Chicago. His current research focuses on the application of robust optimization to revenue management and public policy. He is also broadly interested in using robust optimization to design tractable approaches for dynamic optimization problems. His research has received several recognitions, including second place in the INFORMS Junior Faculty Interest Group (JFIG) Paper Competition and second place in the INFORMS George Nicholson Student Paper Competition.
15:00 (CET)
TU Delft
Title: Machine Learning for K-adaptability in Two-stage Robust Optimization
University of Toronto
Title: Neur2RO: Neural Two-Stage Robust Optimization
Abstract:
Two-stage robust optimization problems constitute one of the hardest optimization problem classes. One solution approach for this class of problems is K-adaptability. This approach simultaneously seeks the best partitioning of the uncertainty set of scenarios into K subsets and optimizes the decisions corresponding to each of these subsets. In the general case, it is solved using the K-adaptability branch-and-bound algorithm, which requires the exploration of exponentially growing solution trees. To accelerate finding high-quality solutions in such trees, we propose a machine learning-based node selection strategy. In particular, we construct a feature engineering scheme based on general two-stage robust optimization insights that allows us to train our machine learning tool on a database of previously solved B&B trees and to apply it as-is to problems of different sizes and/or types. We show experimentally that our learned node selection strategy outperforms a vanilla, random node selection strategy when tested on problems of the same type as the training problems, even when the K-value or the problem size differs from the training ones. This work is in collaboration with Krzysztof Postek (BCG) and Ilker Birbil (UvA).
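A minimal sketch of learned node selection in a best-first search over such a tree; featurize, model, and the node interface (expand, is_leaf, objective, lower_bound) are generic placeholders, not the authors' feature scheme or trained model.

```python
import heapq

# Generic best-first node selection driven by a learned scorer (sketch only;
# `featurize` and `model.predict` are hypothetical placeholders).

def learned_node_selection(root, expand, model, featurize, max_nodes=10_000):
    incumbent, best_value = None, float("inf")
    # Higher predicted score = more promising node; heapq is a min-heap, so negate.
    frontier = [(-model.predict(featurize(root)), 0, root)]
    counter = 1
    while frontier and counter < max_nodes:
        _, _, node = heapq.heappop(frontier)
        for child in expand(node):                # branch / repartition scenarios
            if child.is_leaf():
                if child.objective < best_value:
                    incumbent, best_value = child, child.objective
            elif child.lower_bound < best_value:  # prune dominated nodes
                score = model.predict(featurize(child))
                heapq.heappush(frontier, (-score, counter, child))
                counter += 1
    return incumbent, best_value
```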
Abstract:
Robust optimization provides a mathematical framework for modeling and solving decision-making problems under worst-case uncertainty. This work addresses two-stage robust optimization (2RO) problems (also called adjustable robust optimization), wherein first-stage and second-stage decisions are made before and after uncertainty is realized, respectively. This results in a nested min-max-min optimization problem which is extremely challenging computationally, especially when the decisions are discrete. We propose Neur2RO, an efficient machine learning-driven instantiation of column-and-constraint generation (CCG), a classical iterative algorithm for 2RO. Specifically, we learn to estimate the value function of the second-stage problem via a novel neural network architecture that is easy to optimize over by design. Embedding our neural network into CCG yields high-quality solutions quickly, as evidenced by experiments on two 2RO benchmarks, knapsack and capital budgeting. For knapsack, Neur2RO finds solutions close to the best-known values in a few seconds, compared to the three hours required by the state-of-the-art exact branch-and-price algorithm; for larger and more complex instances, Neur2RO finds even better solutions. For capital budgeting, Neur2RO outperforms three variants of the K-adaptability algorithm, particularly on the largest instances, with a 5- to 10-fold reduction in solution time.
Our code and data are available at https://github.com/khalil-research/Neur2RO.
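As a rough illustration of the surrogate idea, the snippet below replaces the expensive inner max-min of CCG with a learned estimate of the second-stage value; value_net and the finite candidate scenario set are placeholders, and Neur2RO itself optimizes over the uncertainty set by embedding the network in a MIP rather than by enumeration.

```python
# Illustrative simplification (not the Neur2RO implementation): inside CCG,
# the adversarial scenario is chosen by maximizing a *predicted* second-stage
# value over a finite candidate set, instead of solving the exact inner problem.

def find_worst_case(x_first_stage, candidate_scenarios, value_net):
    """Return the scenario with the largest predicted second-stage cost."""
    return max(candidate_scenarios,
               key=lambda xi: value_net.predict(x_first_stage, xi))
```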
Bio:
Esther is a PhD student at TU Delft. Her interests lie in embedding machine learning techniques into algorithm design to obtain faster/better results, currently for projects in robust optimization and phylogenetics.
Bio:
Justin is a PhD candidate at the University of Toronto. His research interests are at the intersection of machine learning and operations research, with a current focus on developing learning-based algorithms for discrete optimization, stochastic programming, and robust optimization.
15:00 (CET)
Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM)
Title: Combinatorial Robust Optimization with Decision-Dependent Information Discovery and Polyhedral Uncertainty
Given a nominal combinatorial optimization problem, we consider a robust two-stage variant with polyhedral cost uncertainty, called DDID. In the first stage, DDID selects a subset of uncertain cost coefficients to be observed, and in the second stage, DDID selects a solution to the nominal problem, where the remaining cost coefficients are still uncertain. Given a compact linear programming formulation for the nominal problem, we provide a mixed-integer linear programming (MILP) formulation for DDID. The MILP is compact if the number of constraints describing the uncertainty polytope, other than lower and upper bounds, is constant. In that case, the formulation leads to polynomial-time algorithms for DDID when the number of possible observations is polynomially bounded. We extend this formulation to more general nominal problems through column generation and constraint generation algorithms. We illustrate our reformulations and algorithms numerically on the selection problem, the orienteering problem, and the spanning tree problem.
Link to paper: https://hal.science/hal-04097679/
Michael Poss is a senior research fellow at the CNRS, in the LIRMM laboratory. His current research focuses mainly on robust combinatorial optimization. He is the founding managing editor of the Open Journal of Mathematical Optimization (https://ojmo.centre-mersenne.org/).
15:00 (CET)
University of Amsterdam
Title: An exact method for a class of robust nonlinear optimization problems.
Trier University
Title: Using Column Generation in Column-and-Constraint Generation for Adjustable Robust Optimization
Abstract:
We propose a novel exact approach to solve optimization problems with robust nonlinear constraints that are neither convex nor concave in the uncertain parameters, but are sums of products of linear times concave functions of those parameters. The proposed approach combines a cutting set method with reformulation-perspectification techniques and branch-and-bound. We show that our approach also applies to robust convex optimization, which is a subclass of the problems involving sums of linear times concave functions in the uncertain parameters, thereby extending the existing literature. Numerical experiments on a robust convex geometric optimization problem, a robust quadratic optimization problem with data uncertainty and implementation error, and a lot-sizing problem in which we impose a quadratic decision rule demonstrate the effectiveness of the proposed approach.
Abstract:
Column-and-constraint generation (CCG) algorithms are among the most popular approaches for solving adjustable robust problems. In recent years, they have been successfully applied to a variety of problems, most of which have continuous wait-and-see decisions. While it is typically acknowledged that CCG can scale relatively well for problems with continuous wait-and-see decisions, the same cannot be claimed for problems in which (mixed-)integer decisions must be taken after the uncertainty is revealed. In this talk, we computationally evaluate the performance of CCG algorithms when applied to problems with mixed-integer second-stage decisions and discuss their main weaknesses. Then, we show how column-generation techniques can be employed to alleviate these burdens and successfully reduce the computational time needed to prove optimality. Additionally, we introduce a new heuristic method based on the column-generation procedure to quickly find feasible solutions at each step of the CCG. The scheme can also be enhanced with domain-specific knowledge of the deterministic version of the problem being solved and can exploit parallelism. Finally, we highlight that the method can, at each iteration, be warm-started with knowledge gained at previous iterations of the CCG. This is in contrast to standard CCG algorithms, where each iteration solves a new problem from scratch. The computational benefits of the proposed scheme are then evaluated on two optimization problems arising in logistics and scheduling.
Bio:
Danique is a PhD candidate at the University of Amsterdam. Her research focuses on finding new methods to globally solve hard minimization problems with concave parts in the objective and/or constraint functions. These methods can also be used to solve robust nonlinear optimization problems exactly.
Bio:
Henri is currently a postdoctoral researcher at Universität Trier (Germany). He received a Master's degree in "Optimization and learning of systems" from the Université de Technologie de Compiègne (France) in 2019, as well as an Engineering degree in "Production systems" from the Università degli studi di Genova (Italy) the same year. He earned his Ph.D. at the Università di Bologna (Italy) under the supervision of Michele Monaci and Enrico Malaguti, where he studied adjustable robust optimization problems with nonlinear recourse. His main research interests are in optimization under uncertainty and decomposition methods for large-scale applications. In particular, his work focuses on exact methods for two-stage robust problems with mixed-integer convex wait-and-see decisions using decomposition approaches.
15:00 (CET)
Erasmus University Rotterdam
Title: Two-Stage Robust Quadratic Optimization with Equalities and its Application to Optimal Power Flow
In this work, we consider two-stage quadratic optimization problems under ellipsoidal uncertainty. In the first stage, one needs to decide upon the values of a subset of the optimization variables (control variables). In the second stage, the uncertainty is revealed, and the rest of the optimization variables (state variables) are determined as a solution to a known system of possibly nonlinear equations. This type of problem occurs, for instance, in optimization for dynamical systems, such as electric power systems as well as gas and water networks. We propose a convergent iterative algorithm to build a sequence of approximately robustly feasible solutions. At each iteration, the algorithm optimizes over a subset of the feasible set and uses affine approximations of the second-stage equations while preserving the nonlinearity of the other constraints. We implement our approach for AC Optimal Power Flow and demonstrate the performance of our proposed method on Matpower instances. This work focuses on quadratic problems, but the approach is suitable for more general setups.
This talk is based on the following working paper: https://arxiv.org/abs/2104.03107. This is joint work with Bissan Ghaddar from Ivey Business School and Daniel Molzahn from Georgia Tech.
Olga is an assistant professor of optimization at the Econometric Institute of Erasmus University Rotterdam. She develops solution approaches and approximation algorithms for non-linear optimization problems using convex programming. Typical applications for her research are network-based problems, such as optimization of energy, water, and transport systems.
15:00 (CET)
Robust optimization has become an indispensable tool in tackling uncertainties in optimization. The budgeted uncertainty set introduced by Bertsimas and Sim (2004) is a prevalent approach in this field due to its effectiveness in addressing a host of practical linear optimization problems under uncertainty, though challenges remain in multi-period decision models. In this work, we revisit this approach with a distinct angle: bypassing traditional support boundaries and introducing a robust satisficing model where preferences are conveyed through targets. At the core of our discussion is the cardinal fragility criterion, which evaluates a solution’s vulnerability to input deviations in reaching a target. Our analysis employs a prediction disparity metric rooted in the L1-norm, echoing Bertsimas and Sim (2004), but diverges with its boundary-free support. This fragility construct seamlessly integrates into a multi-attribute robust satisficing model, prioritizing alignment with predetermined attribute targets. By eliminating support constraints, we extend the boundaries of our computational reach. Our approach proficiently handles multiperiod models by employing direct and straightforward formulations, without introducing dual variables or resorting to approximations. Additionally, we present a method to optimize the parameters of the prediction disparity metric using prediction residuals, ensuring the desired confidence profile across various target attainment levels. We conducted two studies: one focusing on portfolio optimization, and the other using simulation to explore if heightened crashing costs can enhance the likelihood of on-time project completion.
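In generic robust-satisficing notation (ours, not necessarily the talk's exact model), the target and the fragility criterion replace the usual worst-case objective, and the support of the uncertainty is left unbounded:

```latex
% Generic robust satisficing form (notation ours): given a target \tau, minimize
% the fragility k, i.e. the rate at which the target may be exceeded as the
% realization z drifts from the prediction \hat{z} under an L1 disparity metric.
\min_{x \in X,\; k \ge 0} \; k
\qquad \text{s.t.} \qquad
f(x, z) \;\le\; \tau \;+\; k\,\lVert z - \hat{z} \rVert_1
\quad \forall\, z \in \mathbb{R}^m .
```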
Dr. Melvyn Sim is Professor and Provost's Chair at the Department of Analytics & Operations, NUS Business school. His research interests fall broadly under the categories of decision making and optimization under uncertainty with applications ranging from finance, supply chain management, healthcare to engineered systems. He is one of the active proponents of Robust Optimization and Satisficing, and has given keynote seminars at international conferences. Dr. Sim is currently serving as a Department Editor of MSOM, and as an associate editor for Operations Research, Management Science and INFORMS Journal on Optimization.
15:00 (CET)
Robust optimization is a prominent approach in optimization to deal with uncertainties in the data of the optimization problem by hedging against the worst-case realization of the uncertain event. Doing this usually leads to a multilevel structure of the mathematical formulation that is very similar to what we are used to considering in bilevel optimization. Hence, these two fields are closely related, but the study of their combination is still in its infancy.
In this talk, we analyze the connections between different types of robust problems (strictly robust problems with and without decision-dependence of their uncertainty sets, min-max-regret problems, and two-stage robust problems) as well as of bilevel problems (optimistic problems, pessimistic problems, and robust bilevel problems). We hope that these results pave the way for a stronger connection between the two fields, in particular to use both theory and algorithms from one field in the other and vice versa.
Short CV
Since 1/2019: Full professor (W3) for “Nonlinear Optimization” at Trier University
4/2014-9/2018: Junior professor (W1) for “Optimization of Energy Systems” at the Friedrich-Alexander-Universität Erlangen-Nürnberg
1/2013: PhD in mathematics at the Leibniz Universität Hannover
10/2008: Diplom in mathematics (minor computer science) at the Leibniz Universität Hannover
Editorial Work
Since June 2022: Editorial Board member of the Journal of Optimization Theory and Applications
Since May 2022: Editorial Board member of Optimization Letters
Since 2021: Associate Editor of OR Spectrum
Since 2021: Associate Editor of the EURO Journal on Computational Optimization
Since 2011: Technical Editor of Mathematical Programming Computation
Awards
Optimization Letters best paper award for the year 2021
Marguerite Frank Award for the best EJCO paper in 2021
MMOR Best Paper Award 2021
1st prize (category: scientific) of the ROADEF/EURO Challenge 2020
Howard Rosenbrock Prize 2020
EURO Excellence in Practice Award 2016
15:00 (CET)
University of Amsterdam and Massachusetts Institute of Technology
Title: A novel algorithm for a broad class of nonconvex optimization problems - and how this can be used in Robust Optimization
We propose a new global optimization approach for solving nonconvex optimization problems in which the nonconvex components are sums of products of convex functions. A broad class of nonconvex problems can be written in this way, such as concave minimization problems, difference-of-convex problems, and fractional optimization problems. Our approach exploits two techniques: first, we introduce a new technique, called the Reformulation-Perspectification Technique (RPT), to obtain a convex approximation of the considered nonconvex continuous optimization problem. Next, we develop a spatial branch-and-bound scheme, leveraging RPT, to obtain a global optimal solution. Numerical experiments on four different convex maximization problems, a quadratically constrained quadratic optimization problem, and a dike height optimization problem demonstrate the effectiveness of the proposed approach. In particular, our approach solves more instances of the considered problems to global optimality than BARON and SCIP. Moreover, for problem instances of larger dimension, our approach outperforms both BARON and SCIP in computation time on most instances, while for smaller dimensions BARON overall performs better on computation time. Finding the worst-case scenario in robust optimization is often a nonconvex optimization problem. We therefore discuss how this novel algorithm can be used to solve robust and adaptive nonlinear optimization problems.
Bio:
Dick den Hertog is a professor of Operations Research at the University of Amsterdam. His research interests cover various fields in prescriptive analytics, in particular linear and nonlinear optimization. In recent years his main focus has been on robust optimization, and recently he started research on Optimization with Machine Learning. He is also active in applying the theory in real-life applications. In particular, he is interested in applications that contribute to a better society. He received the INFORMS Franz Edelman Award twice: in 2013 for his research on optimal flood protection, and in 2021 for his research on optimizing the food supply chain for the UN World Food Programme. Currently, he is doing research to develop better optimization models and techniques for cancer treatment together with researchers from Harvard Medical School and Massachusetts General Hospital (Boston, USA), and he is involved in research to optimize the locations of healthcare facilities, e.g., in Timor-Leste and Vietnam, together with the World Bank. He has been Visiting Professor at MIT for several years now. He is Science-to-Impact Director of the Analytics for a Better World Institute, which he co-founded in 2022.
Bio:
Thodoris Koukouvinos is a fourth-year PhD student at the MIT Operations Research Center, advised by Prof. Dimitris Bertsimas. Before joining MIT, he studied Applied Mathematics and Physics at the National Technical University of Athens in Greece. His research interests include ethics in machine learning, robust optimization, and nonconvex optimization.
15:00 (CET)
University of Graz
Title: On the Complexity of Robust Multi-Stage Problems in the Polynomial Hierarchy
Abstract:
We propose a model for recoverable robust optimization with commitment. Given a combinatorial optimization problem and uncertainty about elements that may fail, we ask for a robust solution that, after the failing elements are revealed, can be augmented in a limited way. However, we commit to preserving the remaining elements of the initial solution. We consider different polynomial-time solvable combinatorial optimization problems and settle the computational complexity of their robust counterparts with commitment. We show for the weighted matroid basis problem that an optimal solution to the nominal problem is also optimal for its robust counterpart. Indeed, matroids are provably the only structures with this strong property. The robust counterparts of other problems, such as the matching and the stable set problem, are NP-hard even in bipartite graphs. However, we establish polynomial-time algorithms for the robust counterparts of the unweighted stable set problem in bipartite graphs and the weighted stable set problem in interval graphs, also known as the interval scheduling problem.
Joint work with Nicole Megow, Komal Muluk, and Britta Peis.
Abstract:
We study the computational complexity of multi-stage robust optimization problems. Such problems are formulated with alternating min/max quantifiers and therefore naturally fall into a higher stage of the polynomial hierarchy. Despite this, almost no hardness results with respect to the polynomial hierarchy are known.
In this work, we examine the hardness of robust two-stage adjustable and robust recoverable optimization with budgeted uncertainty sets. Our main technical contribution is the introduction of a technique tailored to prove Σ^p_3-hardness of such problems. We highlight a difference between continuous and discrete budgeted uncertainty: in the discrete case, a wide range of problems indeed becomes complete for the third stage of the polynomial hierarchy; in particular, this applies to the TSP, independent set, and vertex cover problems. In the continuous case, however, this does not happen, and the problems remain in the first stage of the hierarchy. Finally, if we allow the uncertainty to affect not only the objective but also multiple constraints, then this distinction disappears, and even in the continuous case we encounter hardness for the third stage of the hierarchy. This shows that even robust problems that are already NP-complete can still exhibit a significant computational difference between column-wise and row-wise uncertainty.
Joint work with Marc Goerigk and Lasse Wulf
Bio:
Felix Hommelsheim is a postdoc at the group of Prof. Dr. Nicole Megow at the University of Bremen. He received his PhD at TU Dortmund University, where he was supervised by Prof. Dr. Christoph Buchheim. Felix' research focuses on the design of efficient (approximation) algorithms for combinatorial optimization problems under uncertainty. In particular he is interested in algorithms for survivable network design problems.
Bio:
Stefan Lendl is COO and co-founder of s2 data & algorithms and postdoctoral researcher at University of Graz. He studied mathematics at Graz University of Technology and obtained his PhD under the supervision of Bettina Klinz in 2019. His research interests are both in applications and the theory of robust optimization. His theoretical interest is in obtaining a deeper understanding of the computational complexity of robust optimization problems, using techniques like the polynomial hierarchy and parameterized complexity. His start-up s2 data & algorithms develops MasterScheduler, a software solution applying robust optimization to holistic supply chain optimization.
15:00 (CET)
The facility location problem is a well-known and extensively studied problem in the field of Operations Research, attracting significant attention from researchers and practitioners. In this talk, we specifically address robust facility location problems, which provide a valuable approach to tackle the complexities arising from uncertain demand. We consider a two-stage robust facility location problem on a metric under uncertain demand. The decision-maker needs to decide on the (integral) units of supply for each facility in the first stage to satisfy an uncertain second-stage demand, such that the sum of the first-stage supply cost and the worst-case cost of satisfying the second-stage demand over all scenarios is minimized. The second-stage decisions are only assignment decisions without the possibility of adding recourse supply capacity. This makes our model different from existing work on two-stage robust facility location and set covering problems. We consider an implicit model of uncertainty with an exponential number of demand scenarios specified by an upper bound k on the number of second-stage clients. In an optimal solution, the second-stage assignment decisions depend on the scenario; surprisingly, we show that restricting to a fixed (static) fractional assignment for each potential client irrespective of the scenario gives us an O(log k/log log k)-approximation for the problem. Moreover, the best such static assignment can be computed efficiently, giving us the desired guarantee. Our proposed static assignment policies showcase their effectiveness in achieving strong approximate solutions both theoretically and numerically. Moreover, these policies offer a promising approach to address uncertain demand in diverse facility location scenarios. This is joint work with Vineet Goyal and David Shmoys.
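Schematically, and in our own notation rather than the authors', the model reads:

```latex
% Two-stage robust facility location sketch (notation ours): integral first-stage
% supply s_f at unit cost c_f; the adversary picks any set S of at most k clients,
% who are then fractionally assigned to installed supply at metric cost d_{fj}.
\min_{s \in \mathbb{Z}_{\ge 0}^{F}} \;\; \sum_{f} c_f\, s_f
\;+\; \max_{\,S:\,|S| \le k\,} \;\; \min_{y \ge 0}
\Bigl\{ \sum_{f}\sum_{j \in S} d_{fj}\, y_{fj}
\;:\; \sum_{f} y_{fj} = 1 \;\;\forall j \in S,\;\;
\sum_{j \in S} y_{fj} \le s_f \;\;\forall f \Bigr\}.
```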
Omar El Housni is an assistant professor at Cornell Tech and in the School of Operations Research and Information Engineering at Cornell University. He is also a Field Member of the Center for Applied Mathematics at Cornell. Omar received a PhD in Operations Research from Columbia University and a BS and MS in Applied Mathematics from École Polytechnique in Paris. Omar's research focuses on the design of robust and efficient algorithms for sequential dynamic optimization problems under uncertainty, with applications in revenue management, assortment optimization, facility location problems, and online matching markets.
15:00 (CET)
The University of Texas at Austin
Title: Contextual Reinforcement Learning when we don't know the contexts
Contextual bandits and, more generally, contextual reinforcement learning study the problem where the learner relies on revealed contexts, or labels, to adapt its learning and optimization strategies. What can we do when those contexts are missing?
Statistical learning with missing or hidden information is ubiquitous in many theoretical and applied problems. A basic yet fundamental setting is that of mixtures, where each data point is generated by one of several possible (unknown) processes. In this talk, we are interested in the dynamic decision-making version of this problem. At the beginning of each (finite length, typically short) episode, we interact with an MDP drawn from a set of M possible MDPs. The identity of the MDP for each episode -- the context -- is unknown.
We review the basic setting of MDPs and Reinforcement Learning, and explain in that framework why this class of problems is both important and challenging. Then, we outline several of our recent results in this area, as time permits. We first show that without additional assumptions, the problem is statistically hard in the number of different Markov chains: finding an epsilon-optimal policy requires exponentially (in M) many episodes. We then study several special and natural classes of LMDPs. We show how ideas from the method-of-moments, in addition to the principle of optimism, can be applied here to derive new, sample efficient RL algorithms in the presence of latent contexts.
Constantine Caramanis is a Professor in the ECE department of The University of Texas at Austin. He received a PhD in EECS from The Massachusetts Institute of Technology, in the Laboratory for Information and Decision Systems (LIDS), and an AB in Mathematics from Harvard University. His current research interests focus on decision-making in large-scale complex systems, with a focus on statistical learning and optimization.
15:00 (CET)
Friedrich-Alexander-Universität Erlangen-Nürnberg
Title: (Data-driven) distributional robustness over time: How to 'learn' relevant uncertainties together with robust decisions?
In many applications, determining optimized solutions that are hedged against uncertainties is mandatory. Classical stochastic optimization approaches, however, may be prone to the disadvantage that the underlying probability distributions are unknown or uncertain themselves. On the other hand, standard robust optimization may lead to conservative results as it generates solutions that are feasible regardless of how uncertainties manifest themselves within predefined uncertainty sets. Distributional robustness (DRO) lies at the interface of robust and stochastic optimization as it robustly protects against uncertain distributions. DRO currently receives increased attention and is considered as an alternative that can lead to less conservative but still uncertainty-protected solutions. In this talk, we will review some approaches in the area of distributional robustness. We will explain some recent developments that use scenario observations to learn more about the uncertain distributions over time, together with best possible protected solutions. The goal is to incorporate new information when it arrives and to improve our solution without resolving the entire robust counterpart. We also present computational results showing that our approach based on online learning leads to a practically efficient solution algorithm for DRO over time. This is joint work with K. Aigner, A. Bärmann, K. Braun, F. Liers, S. Pokutta, O. Schneider, K. Sharma, and S. Tschuppik (INFORMS Journal on Optimization, 2023).
Frauke studied mathematics at the University of Cologne and obtained her doctoral degree in computer science in 2004. After research stays in Rome and in Paris, she headed a DFG-funded Emmy Noether junior research group in Cologne. In 2010, she obtained her Habilitation, and in 2012 she became Professor of Applied Mathematics in the Department of Mathematics at the Friedrich-Alexander-Universität Erlangen-Nürnberg. Her research is in optimization under uncertainty, including combinatorial, mixed-integer, and nonlinear aspects. Integrating knowledge from data analysis is of high interest. She has coordinated the European Graduate College MINOA (Marie-Curie ITN, 2018-2021) and is Principal Investigator in the Collaborative Research Centers CRC 1411 and CRC/TRR 154. Since 04/2023, she has been spokesperson of the CRC/TRR 154.
15:00 (CET)
University of Passau
Title: Scenario Reduction for Robust Optimization with Discrete Uncertainty
Robust optimization typically follows a worst-case perspective, where a single scenario may determine the objective value of a given solution. Accordingly, it is a challenging task to reduce the size of an uncertainty set without changing the resulting objective value too much. On the other hand, robust optimization problems with many scenarios tend to be hard to solve, in particular two-stage problems. Hence, a reduced uncertainty set may be central to finding solutions in reasonable time. We propose scenario reduction methods that give guarantees on the performance of the resulting robust solution. Scenario reduction problems for one- and two-stage robust optimization are framed as optimization problems that depend only on the uncertainty set and not on the underlying decision-making problem. Experimental results indicate that objective values for the reduced uncertainty sets are closely correlated with the original objective values, resulting in better solutions than general-purpose clustering methods such as K-means.
Authors: Marc Goerigk and Mohammad Khosravi
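For intuition only, the snippet below shows a generic distance-based reduction baseline (a farthest-point, k-center-style greedy) of the kind one might compare against; it is not the guarantee-preserving method proposed in the talk.

```python
import numpy as np

# Generic k-center-style greedy scenario reduction (illustrative baseline only,
# not the guarantee-preserving method of the talk): keep k scenarios so that
# every discarded scenario is close to some retained one.

def greedy_reduce(scenarios, k):
    """scenarios: (N, d) array of uncertainty realizations; keep k of them."""
    scenarios = np.asarray(scenarios, dtype=float)
    chosen = [0]                                   # start from an arbitrary scenario
    dist = np.linalg.norm(scenarios - scenarios[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                 # farthest-point selection
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(scenarios - scenarios[nxt], axis=1))
    return scenarios[chosen], float(dist.max())    # reduced set and coverage radius

reduced, radius = greedy_reduce(np.random.rand(200, 5), k=10)
```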
Marc Goerigk is Professor for Business Decisions and Data Science at the University of Passau. His research focuses mainly on robust decision making for combinatorial problems, in particular regarding aspects of complexity and making the best use of data. Marc holds a PhD in mathematics from the University of Göttingen. Before joining his colleagues in Passau, he was Professor at the University of Siegen, lecturer at Lancaster University and post-doctoral researcher at the University of Kaiserslautern.
17:00 (CET)
University of Southern California
Title: Learning Optimal Classification Trees Robust to Distribution Shifts
We consider the problem of learning classification trees that are robust to distribution shifts between training and testing/deployment data. This problem arises frequently in high-stakes settings such as public health and social work, where data is often collected using self-reported surveys that are highly sensitive to, e.g., the framing of the questions, the time and place at which the survey is conducted, and the level of comfort the interviewee has in sharing information with the interviewer. We propose a method for learning optimal robust classification trees based on mixed-integer robust optimization technology. In particular, we demonstrate that the problem of learning an optimal robust tree can be cast as a single-stage mixed-integer robust optimization problem with a highly nonlinear and discontinuous objective. We reformulate this problem equivalently as a two-stage linear robust optimization problem, for which we devise a tailored solution procedure based on constraint generation. We evaluate the performance of our approach on numerous publicly available datasets and compare it to a regularized, non-robust optimal tree. We show an increase of up to 14.16% in worst-case accuracy and of up to 4.72% in average-case accuracy across several datasets and distribution shifts from using our robust solution in comparison to the non-robust one.
Phebe Vayanos is an Associate Professor of Industrial & Systems Engineering and Computer Science at the University of Southern California. She is also an Associate Director of CAIS, the Center for Artificial Intelligence in Society at USC. Her research is focused on Operations Research and Artificial Intelligence and in particular on optimization and machine learning. Her work is motivated by problems that are important for social good, such as those arising in public housing allocation, public health, and biodiversity conservation. Prior to joining USC, she was lecturer in the Operations Research and Statistics Group at the MIT Sloan School of Management, and a postdoctoral research associate in the Operations Research Center at MIT. She holds a PhD degree in Operations Research and an MEng degree in Electrical & Electronic Engineering, both from Imperial College London. She is a recipient of the NSF CAREER award and the INFORMS Diversity, Equity, and Inclusion Ambassador Program Award.
15:00 (CET)
University of Amsterdam
Title: Finding regions of counterfactual explanations via robust optimization
Imperial College London
Title: Differential Privacy via Distributionally Robust Optimization
Abstract:
Counterfactual explanations (CEs) are critical for detecting bias and improving the explainability of data-driven classification models. A counterfactual explanation is a minimally perturbed data point for which the decision of the model changes. However, most existing methods can only provide one CE, which may not be feasible for the user. In this talk, we present an iterative method that calculates robust CEs, i.e., CEs that remain valid even after the features are slightly perturbed. The method provides a whole region of CEs, allowing the user to choose a suitable recourse to obtain a desired outcome. We discuss how to use algorithmic ideas from robust optimization and prove convergence results for the most common machine learning methods, including logistic regression, decision trees, random forests, and neural networks.
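Schematically, and in our notation, the plain and the robust counterfactual explanation problems can be contrasted as follows:

```latex
% Schematic formulations (notation ours). A counterfactual explanation x' for a
% point x under classifier f is a minimal perturbation that flips the decision;
% a robust CE must keep the flipped decision under small feature perturbations.
\text{CE:}\;\; \min_{x'} \lVert x' - x \rVert \;\; \text{s.t.} \;\; f(x') \ne f(x),
\qquad
\text{robust CE:}\;\; \min_{x'} \lVert x' - x \rVert \;\; \text{s.t.} \;\; f(x' + \delta) \ne f(x) \;\; \forall\, \lVert \delta \rVert \le \epsilon .
```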
Abstract:
In recent years, differential privacy has emerged as the de facto standard for sharing statistics of datasets while limiting the disclosure of private information about the involved individuals. This is achieved by randomly perturbing the statistics to be published, which in turn leads to a privacy-accuracy trade-off: larger perturbations provide stronger privacy guarantees, but they result in less accurate statistics that offer lower utility to the recipients. Of particular interest are therefore optimal mechanisms that provide the highest accuracy for a pre-selected level of privacy. To date, work in this area has focused on specifying families of perturbations a priori and proving their asymptotic and/or best-in-class optimality.
In this paper, we develop a class of mechanisms that enjoy non-asymptotic and unconditional optimality guarantees. To this end, we formulate the mechanism design problem as an infinite-dimensional distributionally robust optimization problem. We show that this problem affords a strong dual, and we develop converging hierarchies of finite-dimensional upper and lower bounding problems. Our upper bounds correspond to implementable perturbations whose suboptimality can be bounded by our lower bounds. Both bounding problems can be solved within seconds via cutting plane techniques that exploit the inherent problem structure. Our numerical experiments demonstrate that our perturbations can outperform the previously best results from the literature on artificial as well as benchmark problems.
Co-authors: Huikang Liu & Wolfram Wiesemann
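For reference, the (pure) ε-differential-privacy requirement that such a randomized mechanism M must satisfy, for all neighbouring datasets D, D' differing in one individual and all measurable output sets S, is:

```latex
% \varepsilon-differential privacy (standard definition):
\Pr\bigl[\, M(D) \in S \,\bigr] \;\le\; e^{\varepsilon}\, \Pr\bigl[\, M(D') \in S \,\bigr]
\qquad \text{for all neighbouring } D, D' \text{ and all measurable } S .
```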
Bio:
Donato Maragno is a PhD candidate at the Department of Business Analytics, University of Amsterdam in the Netherlands. His research interest centers around exploring the integration of machine learning and optimization techniques. He has contributed to the development of OptiCL, an open-source tool that enables optimization with constraint learning.
Bio:
Aras Selvi is a PhD candidate in Operations Research at Imperial College Business School, where he is supervised by Wolfram Wiesemann as a member of the Models and Algorithms for Decision-Making under Uncertainty research group. His research interests are at the intersection of robust optimization, machine learning, and computational privacy. He is currently a fellow of The Alan Turing Institute as a recipient of the PhD Enrichment Scheme placement program. Aras is also affiliated with the Computational Optimization Group and the Data Science Institute of Imperial College London.
15:00 (CET)
Technion - Israel Institute of Technology
Title: An Algorithm for Maximizing a Convex Function Based on its Minimum
Abstract:
In this talk an algorithm for maximizing a convex function over a convex feasible set (a common problem in Robust Analysis) is proposed. The algorithm consists of two phases: in phase 1 a feasible starting point is obtained that is used in a Gradient Descent algorithm in phase 2. The main contribution of the paper is connected to phase 1; five different methods are used to approximate the original NP-hard problem of maximizing a convex function (MCF) by a tractable convex optimization problem. All the methods use the minimizer of the convex objective function in their construction. The performance of the overall algorithm is tested on a wide variety of MCF problems, demonstrating its efficiency.
Bio:
Aharon Ben-Tal is a Professor of Operations Research at the Technion – Israel Institute of Technology. He received his Ph.D. in Applied Mathematics from Northwestern University in 1973. He has been a Visiting Professor at the University of Michigan, the University of Copenhagen, Delft University of Technology, MIT, CWI Amsterdam, Columbia, and NYU. His interests are in continuous optimization, particularly nonsmooth and large-scale problems, conic and robust optimization, as well as convex and nonsmooth analysis. In recent years the focus of his research has been on optimization problems affected by uncertainty; he is one of the founders of the Robust Optimization methodology to address these problems. In the last 20 years he has devoted much effort to engineering applications of optimization methodology and computational schemes. Some of the algorithms developed in the MINERVA Optimization Center are in use by industry (medical imaging, aerospace). He has published more than 135 papers in professional journals and co-authored three books. Prof. Ben-Tal was Dean of the Faculty of Industrial Engineering and Management at the Technion (1989-1992 and 2011-2014). He has served on the editorial boards of all major OR/Optimization journals. He has given numerous plenary and keynote lectures at international conferences.
In 2007 Professor Ben-Tal was awarded the EURO Gold Medal - the highest distinction of Operations Research within Europe.
In 2009 he was named Fellow of INFORMS.
In 2015 he was named Fellow of SIAM.
In 2016 he was awarded the INFORMS Khachiyan Prize for Lifetime Achievement in the area of Optimization.
In 2017, the Operations Research Society of Israel (ORSIS) awarded him its Lifetime Achievement Prize.
As of February 2023 his work has over 33,000 citations (Google scholar).
15:00 (CET)
Massachusetts Institute of Technology
Title: Robust Optimization with Information Theory Inspired Uncertainty Sets and its Applications
Abstract
Classical approaches in robust optimization (RO) involve out-of-the-box uncertainty sets such as norm, budget or polyhedral sets. Apart from leveraging available data to cross-validate hyperparameters, out-of-the-box uncertainty sets seldom incorporate distributional information, so they inherently do not capture the real-world behavior of uncertain parameters.
Distributionally Robust Optimization (DRO), by contrast, enforces constraints in expectation under every distribution in a class of underlying distributions. We propose a new type of uncertainty set motivated by information theory, which incorporates the probability distributions directly. Initial computational tests with our approach yield reductions in the maximum and mean violation compared to classical RO and DRO. We apply this novel uncertainty set in a real-world setting at Hartford Hospital's Bone and Joint Institute to optimize the patient census while accounting for the uncertainty in the rest times of patients after surgery. We generate a timetable for surgeons that reduces the monthly census by up to 10%, outperforming both out-of-the-box uncertainty sets and DRO. (Joint work with Benjamin Boucher, MIT.)
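As one standard information-theoretic construction of this flavor (not necessarily the set proposed in the talk), the weights placed on a finite set of scenarios can be confined to a Kullback-Leibler ball around the empirical weights:

```latex
% KL-divergence ball around empirical scenario weights \hat{p} (radius \rho);
% a standard information-theoretic construction, shown only for illustration.
\mathcal{P} \;=\; \Bigl\{\, p \in \Delta_m \;:\; \sum_{s=1}^{m} p_s \log \frac{p_s}{\hat{p}_s} \;\le\; \rho \,\Bigr\}.
```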
Bio
Dimitris Bertsimas is the Boeing Professor of Operations Research and the Associate Dean of Business Analytics at the Massachusetts Institute of Technology. He is a member of the US National Academy of Engineering, an INFORMS fellow, recipient of the John von Neumann Theory Prize, the Frederick W. Lanchester Prize, the Erlang Prize, finalist of the Franz Edelman Prize four times, and the INFORMS President’s Award, among many other research and teaching awards, supervisor of 88 completed and 25 current doctoral theses, editor of the INFORMS Journal on Optimization, co-author of seven textbooks and co-founder of ten analytics companies and two foundations.
15:00 (CET)
Imperial College London
Title: Randomized Assortment Optimization
Abstract
When a firm selects an assortment of products to offer to customers, it uses a choice model to anticipate their probability of purchasing each product. In practice, the estimation of these models is subject to statistical errors, which may lead to significantly suboptimal assortment decisions. Recent work has addressed this issue using robust optimization, where the true parameter values are assumed unknown and the firm chooses an assortment that maximizes its worst-case expected revenues over an uncertainty set of likely parameter values, thus mitigating estimation errors. In this talk, we introduce the concept of randomization into the robust assortment optimization literature. We show that the standard approach of deterministically selecting a single assortment to offer is not always optimal in the robust assortment optimization problem. Instead, the firm can improve its worst-case expected revenues by selecting an assortment randomly according to a prudently designed probability distribution. We demonstrate this potential benefit of randomization both theoretically in an abstract problem formulation as well as empirically across three popular choice models: the multinomial logit model, the Markov chain model, and the preference ranking model. We show how an optimal randomization strategy can be determined exactly and heuristically. Besides the superior in-sample performance of randomized assortments, we demonstrate improved out-of-sample performance in a data-driven setting that combines estimation with optimization. Our results suggest that more general versions of the assortment optimization problem—incorporating business constraints, more flexible choice models and/or more general uncertainty sets—tend to be more receptive to the benefits of randomization.
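Schematically, and in our notation, the contrast between deterministic and randomized robust assortment optimization is the following, with the randomized worst case never worse than the deterministic one:

```latex
% Deterministic vs. randomized robust assortment (notation ours): U is the
% uncertainty set of choice-model parameters, S the family of feasible
% assortments, R(S,u) the expected revenue of assortment S under parameters u.
\max_{S \in \mathcal{S}} \; \min_{u \in U} \; R(S, u)
\;\;\le\;\;
\max_{q \in \Delta(\mathcal{S})} \; \min_{u \in U} \; \mathbb{E}_{S \sim q}\bigl[\, R(S, u) \,\bigr].
```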
Bio
Wolfram Wiesemann is Professor of Analytics & Operations as well as the head of the Analytics, Marketing & Operations department at Imperial College Business School. His research interests revolve around decision-making under uncertainty, with applications to supply chain management, healthcare, and energy. Wolfram is an elected member of the boards of the Mathematical Optimization Society and the Stochastic Programming Society, and he serves as an area editor for Operations Research Letters as well as an associate editor for Mathematical Programming, Operations Research, Manufacturing & Service Operations Management, and SIAM Journal on Optimization.