15:00 (CET)
RWTH Aachen University
A Note on Piecewise Affine Decision Rules for Robust, Stochastic, and Data-Driven Optimization
Abstract:
Multi-stage decision-making under uncertainty, where decisions are taken as uncertain problem parameters are sequentially revealed, is often essential to faithfully model managerial problems. Given the significant computational challenges involved, these problems are typically solved approximately. This short note introduces an algorithmic framework that revisits a popular approximation scheme for multi-stage stochastic programs by Georghiou et al. (2015) and improves upon it to deliver superior policies in the stochastic setting, as well as to extend its applicability to robust optimization and a contemporary Wasserstein-based data-driven setting. We demonstrate how the policies of our framework can be computed efficiently, and we present numerical experiments that highlight the benefits of our method.
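As a rough sketch (the notation here is illustrative and not taken from the note itself), a piecewise affine decision rule in the spirit of the lifted rules of Georghiou et al. (2015) makes each stage's decision affine in a piecewise linear lifting of the uncertainty observed so far:

```latex
% Sketch of a piecewise affine decision rule (illustrative notation):
% the stage-t decision x_t is affine in a piecewise linear lifting L
% of the uncertainty history \xi^t observed up to stage t.
x_t(\xi) \;=\; x_t^0 + X_t\, L(\xi^t),
\qquad
L_j(\xi^t) \;=\; \max\{\xi_j - z_j,\ 0\}
```

Optimizing over the coefficients $x_t^0, X_t$ then yields a tractable restriction of the fully adaptive problem.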
Abstract:
A Semi-Infinite Programming (SIP) problem is an optimization problem with a finite number of decision variables, and an infinite number of parametrized constraints. Standard SIP problems assume the parameter set is independent of the decision variables, whereas in generalized SIP problems, this set may depend on the decisions. Robust Optimization can be viewed as a special case of SIP (with decision-dependent uncertainty sets in the generalized setting). Despite this connection, the two fields have developed almost independently, with separate methods and applications. This talk bridges these perspectives by focusing on advancements in solving specific classes of SIP problems, which naturally apply to robust optimization scenarios.
We focus on convex standard SIP problems in which the separation problem, i.e., the problem of finding the most violated constraint at a given point, is not necessarily convex. Based on the Lagrangian dual of this problem, we derive a convex and tractable restriction of the considered SIP problem, and we state sufficient conditions for the optimality of this restriction. If these conditions are not met, the restriction is enlarged through an Inner-Outer Approximation Algorithm, which converges to the value of the original SIP problem. This new algorithmic approach is compared with the classical Cutting Plane algorithm, for which a new rate of convergence, directly related to the iteration index, is proposed under specific assumptions.
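For readers less familiar with the terminology, a standard SIP and its separation problem can be sketched in generic notation (assumed here, not taken from the talk):

```latex
% Standard SIP: finitely many variables x, infinitely many constraints
% indexed by a fixed parameter set Y (in generalized SIP, Y = Y(x)).
\min_{x \in X} \; f(x)
\quad \text{s.t.} \quad g(x, y) \le 0 \;\; \forall\, y \in Y
% Separation problem at a candidate point \bar{x}: find the most
% violated constraint; this maximization need not be concave in y.
\max_{y \in Y} \; g(\bar{x}, y)
```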
Bio:
Simon Thomä is a Ph.D. student at the Professorship of Business Analytics and Intelligent Systems at TUM School of Management and the Chair of Operations Management at RWTH Aachen. His research focuses on the development of robust and adjustable optimization techniques with applications in network and supply chain design. He earned a Bachelor of Science and a Master of Science in Mathematics from the University of Bonn and a Master of Science in Business Analytics from Imperial College London.
Bio:
Martina Cerulli is a Researcher in the Department of Computer Science at the University of Salerno. She obtained her Master’s Degree in Industrial Engineering from La Sapienza University of Rome in July 2018 and, in December 2021, defended her PhD (funded by the Marie Curie scholarship “MINOA”) at École Polytechnique de Paris, with a thesis on “Bilevel Optimization and Applications,” supervised by Prof. Claudia D’Ambrosio and Prof. Leo Liberti. After her PhD, Martina worked as a postdoctoral researcher at the ESSEC Business School, collaborating with Professors Ivana Ljubic and Claudia Archetti. In 2022 she was named one of the YoungWomen4OR by the WISDOM forum of the EURO association. Her primary research interests include bilevel optimization, mixed-integer programming, and optimization on graphs, with applications in areas such as aircraft conflict resolution, optimal power flow, social network analysis, and last-mile delivery.
15:00 (CET)
Vrije Universiteit Amsterdam
Exact and Approximate Schemes for Robust Optimization Problems with Decision-Dependent Information Discovery
Abstract:
In uncertain optimization problems with decision-dependent information discovery, the decision-maker can influence when information is revealed, unlike the classic setting where uncertain parameters are revealed according to a prescribed filtration. This work examines two-stage robust optimization problems with decision-dependent information discovery, focusing on uncertainty in the objective function. We introduce the first exact algorithm for this class of problems and enhance the existing K-adaptability approximation by strengthening its formulation. Our approaches are tested on robust orienteering and shortest-path problems. We study the interplay between information and adaptability, demonstrating that the exact solution method often outperforms the K-adaptability approximation. Moreover, our experiments highlight the benefits of the strengthened K-adaptability formulation over the classical one, even in settings with decision-independent information discovery.
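The K-adaptability approximation mentioned above is commonly written as follows (a generic sketch for objective uncertainty; the notation is illustrative, not the authors' exact formulation): the decision-maker commits to K candidate second-stage policies here-and-now and selects the best one after the uncertainty is revealed.

```latex
% K-adaptability with objective uncertainty (sketch): x is the
% first-stage decision, y^1,...,y^K are K candidate recourse decisions
% fixed before \xi is observed; the best candidate is picked afterwards.
\min_{x,\, y^1, \dots, y^K} \;\; \max_{\xi \in \Xi} \;\; \min_{k = 1, \dots, K} \; \xi^\top \big( C x + D y^k \big)
```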
Bio:
Rosario Paradiso is an Assistant Professor in the Department of Operations Analytics at Vrije Universiteit Amsterdam in the Netherlands. He earned his PhD from the Department of Mathematics and Computer Science at the University of Calabria in Italy in 2020. Rosario's research primarily focuses on exact algorithms for combinatorial optimization problems, both deterministic and under uncertainty, with applications in supply chain management.
15:00 (CET)
Abstract:
Problem uncertainty typically limits how well decisions can be tailored to the problem at hand but often cannot be avoided. The availability of large quantities of data in modern applications, however, presents an exciting opportunity to nevertheless make better-informed decisions. Distributionally robust optimization (DRO) has recently gained prominence as a paradigm for making data-driven decisions that are protected against adverse overfitting effects. We justify the effectiveness of this paradigm by pointing out that certain DRO formulations are indeed statistically efficient. Furthermore, DRO formulations can also be tailored to protect decisions against overfitting even when working with messy, corrupted data. Finally, we discuss a hitherto hidden relation between DRO and classical robust statistics.
Bio:
Bart obtained his PhD in control theory at ETH Zurich in 2016 under the supervision of Prof. Manfred Morari. He was an SNSF postdoctoral researcher at the Operations Research Center at MIT from 2018 to 2020 and Assistant Professor at the MIT Sloan School of Management from 2018 to 2024. As of 2024, Bart is a tenured senior researcher in the Stochastics Group at CWI Amsterdam. Bart's research is located at the interface between optimization and machine learning. He particularly enjoys developing novel mathematical methodologies and algorithms that turn data into better decisions. Although most of his research is methodological, he does enjoy applications related to problems in renewable energy.
15:00 (CET)
University of Edinburgh
Two-stage and Lagrangian Dual Decision Rules for Multistage Adaptive Robust Optimization
Abstract:
We design primal and dual bounding methods for general multistage adaptive robust optimization (MSARO) problems. From the primal perspective, this is achieved by applying decision rules that restrict the functional forms of only a certain subset of decision variables, resulting in an approximation of MSARO as a two-stage adjustable robust optimization (2ARO) problem, which we solve by leveraging the 2ARO literature. From the dual perspective, decision rules are applied to the Lagrange multipliers of a Lagrangian dual of MSARO, resulting in a two-stage stochastic optimization problem. As the quality of the resulting dual bound depends on the distribution chosen when developing the dual formulation, we define a distribution optimization problem aimed at optimizing the obtained bound, and we develop solution methods tailored to the nature of the recourse variables. Computational experiments illustrate the potential value of our bounding techniques compared to existing methods.
(This is a joint work with Maryam Daryalal and Ayse N. Arslan.)
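The primal restriction described above can be sketched generically (illustrative notation, not the authors' exact formulation): applying decision rules to only a subset of the adjustable variables collapses the multistage problem into a two-stage one.

```latex
% MSARO (sketch): all variables y_2,...,y_T adapt to the revealed history \xi^t.
\min_{x} \; \max_{\xi \in \Xi} \; \min_{y_2(\cdot), \dots, y_T(\cdot)} \;
c^\top x + \sum_{t=2}^{T} d_t^\top y_t(\xi^t)
% Primal bound: restrict a subset S of the adjustable variables to, e.g.,
% affine rules y_t(\xi^t) = y_t^0 + Y_t \xi^t for t \in S, while the
% remaining variables stay fully adaptive, yielding a 2ARO approximation.
```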
Bio:
Merve Bodur is an Associate Professor in the School of Mathematics at the University of Edinburgh. She obtained her Ph.D. from the University of Wisconsin-Madison, her B.S. in Industrial Engineering and her B.A. in Mathematics from Bogazici University, Turkey. Her main research area is optimization under uncertainty, primarily for discrete optimization problems, with applications in a variety of areas such as scheduling, transportation, healthcare, telecommunications, and power systems. She serves on the editorial boards of Operations Research Letters, Omega, and INFOR. She is currently the Vice Chair/Chair-Elect of the INFORMS Computing Society, serves on the Committee on Stochastic Programming, and is a former Vice Chair of the INFORMS Optimization Society.
15:00 (CET)
Abstract:
In the shortest path problem, we are given a directed graph G with nonnegative arc costs, and we seek an s-t path in G of smallest total cost. This fundamental combinatorial problem can be solved efficiently in general digraphs. In some applications, a path computed in the first stage can be modified to some extent after the costs in G have changed. Knowing the new cost scenario, we then compute another (second-stage) path in some prescribed neighborhood of the first-stage path, so that the total cost of both paths is minimal. Typically, the second-stage arc costs are uncertain while the first-stage path is computed, and we seek a solution that minimizes the total cost under the worst second-stage cost realization. We use interval uncertainty sets, which are very common in robust optimization, to model the uncertain second-stage arc costs. The recoverable robust shortest path problem under interval uncertainty has been extensively studied since the work of Büsing (2011, 2012). In this talk, some new results in this area are presented. We characterize the complexity of the problem depending on the class of the input digraph (general, acyclic, arc series-parallel), the type of neighborhood (arc inclusion, arc exclusion, arc symmetric difference), and the structure of the interval uncertainty set (discrete budgeted, continuous budgeted). We also state some open problems in this area.
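In generic notation (a sketch, not necessarily the speaker's exact formulation), the recoverable robust shortest path problem reads:

```latex
% Recoverable robust shortest path (sketch): P is a first-stage s-t path
% with known costs c, the adversary picks second-stage costs C from an
% interval uncertainty set U, and the recovery path Q must lie in a
% prescribed neighborhood N(P) of P (e.g. bounded arc symmetric difference).
\min_{P \in \mathcal{P}} \Big( c(P) \;+\; \max_{C \in U} \; \min_{Q \in N(P)} C(Q) \Big)
```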
Bio:
Adam Kasperski is a Professor at the Wroclaw University of Science and Technology. His scientific interests focus on robust discrete optimization problems, and his work aims to better understand the computational complexity of different approaches in this area. He is currently the head of a research project entitled "Decision making under risk and uncertainty - concepts, methods, and applications" at the Wroclaw University of Science and Technology.
15:00 (CET)
Delft University of Technology
Robust Optimal Classification Trees
Friedrich-Alexander-Universität Erlangen-Nürnberg
Towards Robust Interpretable Surrogates for Optimization
Abstract:
Decision trees are a popular choice of interpretable model, but just like neural networks, they suffer from adversarial examples: small adversarial perturbations that result in misclassifications. Existing algorithms for fitting decision trees that are robust against adversarial examples are greedy heuristics and therefore lack approximation guarantees. In this work, we propose Robust Optimal Classification Trees (ROCT), a collection of methods to train decision trees that are optimally robust against user-specified attack models. We show that the min-max optimization problem that arises in adversarial learning can be solved using a single minimization formulation for decision trees with 0-1 loss. We propose such formulations in Mixed-Integer Linear Programming and Maximum Satisfiability, which widely available solvers can optimize. We also present a method that determines an upper bound on adversarial accuracy for any model using bipartite matching.
Abstract:
An important factor in the practical implementation of optimization models is the acceptance by the intended users. This is influenced, among other factors, by the interpretability of the solution process. At the same time, uncertainty in the parameters of optimization problems can complicate decision-making, a challenge often addressed through robust optimization techniques. Our work aims to integrate interpretability and robustness by developing optimization rules, in the form of decision trees, that are both resilient to parameter perturbations and inherently interpretable. We present a model and corresponding solution methods for this purpose, along with an evaluation of heuristic approaches.
Bio:
Daniël Vos is a postdoc in the Algorithmics group at TU Delft. His work aims to make machine learning models more trustworthy by combining techniques from machine learning, robust optimization, and cyber security to improve interpretability, robustness, and privacy.
Bio:
Michael Hartisch currently serves as a temporary professor for Analytics & Mixed-Integer Optimization at Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany. Prior to this role, he was a temporary professor for Network and Data Science Management at the University of Siegen, Germany. His academic journey began with studies in mathematics at Friedrich-Schiller-University Jena, Germany, and he received his PhD at the University of Siegen. Michael’s research primarily focuses on optimization under uncertainty, as well as explainability and interpretability in optimization.
15:00 (CET)
George Mason University
New Prescriptive Analytics and Data-Driven Optimization Paradigms
Abstract:
We present prescriptive analytics paradigms for data-driven decision-making for complex systems and environments. Conventional data-driven decision-making methods assume access to plentiful, well-structured data to generate a prediction model, and subsequently, an optimization model is used to create a decision. However, real data can be incomplete (sparse, missing), scarce, partially observable, and time-varying. We present new learning and optimization frameworks to tackle emerging challenges caused by the complexity and limitations of data and difficulties inherent in data-system integration. Our approaches are based on amalgams of machine learning, distributionally robust/stochastic optimization, and combinatorial optimization.
Bio:
Hoda Bidkhori is an Assistant Professor at George Mason University, where her research focuses on the theory and applications of data analytics and data-driven optimization, with applications in logistics, supply chain management, and healthcare. Previously, she served as an Assistant Professor at the University of Pittsburgh. Prior to that, she was a postdoctoral researcher in Operations Research at MIT, where she also earned her Ph.D. in Applied Mathematics.
15:00 (CET)
Abstract:
We study decision problems under uncertainty, where the decision-maker has access to K data sources that carry biased information about the underlying risk factors. The biases are measured by the mismatch between the risk factor distribution and the K data-generating distributions with respect to an optimal transport (OT) distance. In this situation, the decision-maker can exploit the information contained in the biased samples by solving a distributionally robust optimization (DRO) problem, where the ambiguity set is defined as the intersection of K OT neighborhoods, each of which is centered at the empirical distribution on the samples generated by a biased data source. We show that if the decision-maker has a prior belief about the biases, then the out-of-sample performance of the DRO solution can improve with K, irrespective of the magnitude of the biases. We also show that, under standard convexity assumptions, the proposed DRO problem is computationally tractable if either K or the dimension of the risk factors is kept constant.
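In generic notation (a sketch under the assumptions stated in the abstract; symbols are illustrative), the resulting DRO problem over the intersection of OT neighborhoods reads:

```latex
% DRO over an intersection of K optimal-transport neighborhoods (sketch):
% \hat{P}_k is the empirical distribution of the samples from source k,
% W is an OT (e.g. Wasserstein) distance, and r_k is a radius encoding
% the decision-maker's belief about the bias of source k.
\min_{x \in X} \; \sup_{Q \in \mathcal{A}} \; \mathbb{E}_{\xi \sim Q}\!\big[\ell(x, \xi)\big],
\qquad
\mathcal{A} \;=\; \bigcap_{k=1}^{K} \big\{ Q : W(Q, \hat{P}_k) \le r_k \big\}
```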
Bio:
Daniel Kuhn is Professor of Operations Research at the College of Management of Technology at EPFL, where he holds the Chair of Risk Analytics and Optimization (RAO). His current research interests are focused on data-driven optimization, the development of efficient computational methods for the solution of stochastic and robust optimization problems and the design of approximation schemes that ensure their computational tractability. His work is primarily application-driven, the main application areas being engineered systems, machine learning, business analytics and finance. Before joining EPFL, Daniel Kuhn was a faculty member in the Department of Computing at Imperial College London (2007-2013) and a postdoctoral research associate in the Department of Management Science and Engineering at Stanford University (2005-2006). He is the editor-in-chief of Mathematical Programming.