Archive

June 14, 2024

15:00 (CET)


University of Groningen


Multi-Stage Robust Mixed-Integer Programming

Abstract:

Multi-stage mixed-integer robust optimization, in which decisions are taken sequentially as new information becomes available about the uncertain problem parameters, is a very versatile yet computationally challenging paradigm for decision making under uncertainty. In this talk, I will propose solution methods for  two-stage and multi-stage versions of such problems. For the first, we will construct piecewise constant decision rules by adaptively partitioning the uncertainty set. The partitions of this set are iteratively updated by separating so-called critical scenarios. We discuss how to identify such critical scenarios based on the branch-and-bound procedure for solving the current corresponding static problem. For multi-stage robust mixed-integer programs, we will consider a finite adaptability scheme, which allows us to decompose the multi-stage problem into a large number of much simpler two-stage problems. We discuss how these two-stage problems can be solved both exactly and approximately, and we report numerical results for route planning and location-transportation problems.
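
To make the partitioning idea concrete, here is a stylized toy sketch (not the speaker's algorithm): the wait-and-see decision is forced to be constant on each cell of a partition of a one-dimensional uncertainty set, and refining the partition, which the talk drives by separating critical scenarios, reduces the worst-case cost. It assumes cvxpy and numpy are installed.

```python
# Toy sketch of piecewise constant decision rules over a partitioned
# uncertainty set (illustrative only; not the method from the talk).
# The wait-and-see decision y must be constant on each cell of a partition
# of U = [0, 1]; the second-stage cost is |y - u|.  Refining the partition
# reduces the worst-case cost.
import numpy as np
import cvxpy as cp

def worst_case_cost(num_cells):
    edges = np.linspace(0.0, 1.0, num_cells + 1)
    t = cp.Variable()                      # worst-case second-stage cost
    constraints = []
    for a, b in zip(edges[:-1], edges[1:]):
        y = cp.Variable()                  # one recourse decision per cell
        # |y - u| is convex in u, so its maximum over [a, b] is at an endpoint
        constraints += [t >= cp.abs(y - a), t >= cp.abs(y - b)]
    cp.Problem(cp.Minimize(t), constraints).solve()
    return t.value

for cells in (1, 2, 4, 8):                 # 1 cell = static robust solution
    print(cells, "cell(s): worst-case cost =", round(float(worst_case_cost(cells)), 4))
# prints 0.5, 0.25, 0.125, 0.0625 -- each refinement halves the conservatism
```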

Bio:

Ward Romeijnders is a full professor within the Department of Operations at the University of Groningen. His research is focused on developing exact and approximate solution methods for integer optimization problems under uncertainty. This class of problems can be used to support decision making under uncertainty for a wide range of applications in, e.g., energy, healthcare, logistics, and finance. Ward publishes his research in journals such as Operations Research, Mathematical Programming, SIAM Journal on Optimization, European Journal of Operational Research, and INFORMS Journal on Computing, and he is Associate Editor of Mathematical Methods of Operations Research. He has received several research grants from the Netherlands Organisation for Scientific Research, the latest one for a project called "Discrete Decision Making under Uncertainty".

May 31, 2024

15:00 (CET)

Princeton University


Learning for Decision-Making under Uncertainty

Abstract:

We present two machine learning methods to learn uncertainty sets in decision-making problems affected by uncertain data.  In the first part, we introduce mean robust optimization (MRO), a framework that constructs data-driven uncertainty sets based on clustered data rather than on observed data points directly. By varying the number of clusters, MRO balances computational complexity and conservatism, and effectively bridges robust and Wasserstein distributionally robust optimization.  In the second part, we present LRO, a method to automatically learn the shape and size of the uncertainty sets in robust optimization.  LRO optimizes the performance across a family of parametric problems, while guaranteeing a target probability of constraint satisfaction.  It relies on a stochastic augmented Lagrangian method that differentiates the solutions of robust optimization problems with respect to the parameters of the uncertainty set. Our approach is very flexible, and it can learn a wide variety of uncertainty sets commonly used in practice.  We illustrate the benefits of our two machine learning methods on several numerical examples, obtaining significant reductions in computational complexity and conservatism, while preserving out-of-sample constraint satisfaction guarantees.
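
As a rough illustration of the clustering idea behind MRO (a hedged sketch, not the speakers' exact formulation), the snippet below clusters return samples with K-means and maximizes the worst-case portfolio return over norm balls around the centroids; the data, the ball radius eps and the portfolio setting are invented for the example.

```python
# Hedged sketch of "cluster the data, then be robust around the centroids".
# The uncertainty set is a union of L2 balls of radius eps around K-means
# centroids of the samples; we maximize the worst-case portfolio return.
# With one cluster per sample this is a scenario-plus-ball (Wasserstein-style)
# model; with a single cluster everything collapses to a ball around the mean.
import numpy as np
import cvxpy as cp
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
samples = rng.normal(0.05, 0.10, size=(500, 8))     # hypothetical return data

def cluster_robust_portfolio(data, n_clusters, eps):
    centroids = KMeans(n_clusters=n_clusters, n_init=10,
                       random_state=0).fit(data).cluster_centers_
    x = cp.Variable(data.shape[1], nonneg=True)      # portfolio weights
    t = cp.Variable()                                # worst-case return
    cons = [cp.sum(x) == 1]
    for d in centroids:                              # robust against each ball
        cons.append(t <= d @ x - eps * cp.norm(x, 2))
    cp.Problem(cp.Maximize(t), cons).solve()
    return float(t.value)

for K in (1, 10, 100):                               # fewer clusters = cheaper model
    print(f"{K:3d} clusters: worst-case return {cluster_robust_portfolio(samples, K, 0.02):.4f}")
```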

Bio:

Bartolomeo Stellato is an Assistant Professor in the Department of Operations Research and Financial Engineering at Princeton University. Previously, he was a Postdoctoral Associate at the MIT Sloan School of Management and Operations Research Center. He received a DPhil (PhD) in Engineering Science from the University of Oxford, a MSc in Robotics, Systems and Control from ETH Zurich, and a BSc in Automation Engineering from Politecnico di Milano. He is the developer of OSQP, a widely used solver in mathematical optimization. Bartolomeo Stellato's awards include the NSF CAREER Award, the Princeton SEAS Howard B. Wentz Jr. Faculty Award, the Franco Strazzabosco Young Investigator Award from ISSNAF, the Princeton SEAS Innovation Award in Data Science, the Best Paper Award in Mathematical Programming Computation, and the First Place Prize Paper Award in IEEE Transactions on Power Electronics. His research focuses on data-driven computational tools for mathematical optimization, machine learning, and optimal control.

May 17, 2024

15:00 (CET)


Trier University


Adjustable Robust Nonlinear Network Design under Demand Uncertainties

Abstract:

We study network design problems for nonlinear and nonconvex flow models under demand uncertainties. To this end, we apply the concept of adjustable robust optimization to compute a network design that admits a feasible transport for all, possibly infinitely many, demand scenarios within a given uncertainty set. For solving the corresponding adjustable robust mixed-integer nonlinear optimization problem, we show that a given network design is robust feasible, i.e., it admits a feasible transport for all demand uncertainties, if and only if a finite number of worst-case demand scenarios can be routed through the network. We compute these worst-case scenarios by solving polynomially many nonlinear optimization problems. Embedding this result for robust feasibility in an adversarial approach leads to an exact algorithm that computes an optimal robust network design in a finite number of iterations. Since all of the results are valid for general potential-based flows, the approach can be applied to different utility networks such as gas, hydrogen, or water networks. We finally demonstrate the applicability of the method by computing robust gas networks that are protected from future demand fluctuations.

Bio:

Johannes Thürauf has been a postdoctoral researcher at Trier University since 11/2021. He studied Mathematics at the Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg and received his M.Sc. degree in 2017. Afterward, he worked as a PhD student at FAU Erlangen-Nürnberg and completed his PhD at the end of 2021. His main research interests include robust and bilevel optimization (under uncertainty), for example with applications to networks. In particular, he is interested in the interplay of robust and bilevel optimization.

May 3, 2024

15:00 (CET)

University of Illinois


Distributionally Robust Path Integral Control

Abstract:

We address a continuous-time continuous-space stochastic optimal control problem where the controller lacks exact knowledge of the underlying diffusion process. Instead, the controller relies on a finite set of historical disturbance trajectories. To tackle this challenge, we propose a novel approach called Distributionally Robust Path Integral control (DRPI). This method utilizes distributionally robust optimization (DRO) to make the resulting policy robust against the unknown diffusion process. Notably, the DRPI scheme bears resemblance to risk-sensitive control, allowing us to leverage the path integral control (PIC) framework as an efficient solution method. We establish theoretical performance guarantees for the DRPI scheme, which facilitates the determination of an appropriate risk parameter for the risk-sensitive control. Through extensive validation, we demonstrate the effectiveness of our approach and its superiority over risk-neutral and risk-averse PIC policies in scenarios where the true diffusion process is unknown.

Bio:

Grani A. Hanasusanto is an Associate Professor of Industrial & Enterprise Systems Engineering at the University of Illinois Urbana-Champaign (UIUC). Before joining UIUC, he was an Assistant Professor at The University of Texas at Austin and a Postdoctoral Scholar at the College of Management of Technology at École Polytechnique Fédérale de Lausanne. He holds a Ph.D. in Operations Research from Imperial College London and an MSc in Financial Engineering from the National University of Singapore. His research focuses on developing tractable solution schemes for decision-making problems under uncertainty, with applications spanning operations management, energy systems, finance, data analytics, and machine learning. He is a recipient of the NSF CAREER award and the INFORMS Diversity, Equity, and Inclusion (DEI) Ambassadors Program award. He currently serves as a committee member of the INFORMS Diversity Community and an Associate Editor for Operations Research.

April 19, 2024

15:00 (CET)

Ayse Nur Arslan


Université de Bordeaux


Uncertainty reduction in (static) robust optimization 

Abstract:

In this talk, we focus on static robust optimization problems where the uncertainty set can be controlled through the actions of the decision-maker, a setting known as decision-dependent uncertainty. In particular, we consider an uncertainty reduction paradigm where the decision-maker can reduce upper bounds of uncertain parameters as a result of some proactive actions. This paradigm was recently proposed in the literature, and the resulting problems were shown to be NP-hard. We pay particular attention to the special case of robust combinatorial optimization problems and show that they are also NP-hard under the uncertainty reduction paradigm. Despite this discouraging result, we show that under some additional assumptions polynomial-time algorithms can be devised to solve robust combinatorial optimization problems with uncertainty reduction. We additionally provide insights into possible MILP reformulations in the general case and illustrate the practical relevance of our results on shortest path instances from the literature.

Bio:

Ayse N. Arslan is a junior researcher at Inria Center of Bordeaux. Her research interests lie at the intersection of mixed-integer programming and optimization under uncertainty with a particular focus on stochastic and robust optimization and their applications. Some of her recent work includes decomposition algorithms for solving two-stage robust optimization problems and primal and dual decision rules for approximating multi-stage robust optimization problems.

March 22, 2024

15:00 (CET)

Karmel S. Shehadeh


Lehigh University


An inexact column-and-constraint generation method to solve two-stage robust optimization problems

Abstract:

In this talk, we will present our new inexact column-and-constraint generation (i-C&CG) method for solving challenging two-stage robust and distributionally robust optimization problems. In our i-C&CG method, the master problems only need to be solved to a prescribed relative optimality gap or time limit, which may be the only tractable option when solving large-scale and/or challenging problems in practice. We equip i-C&CG with a backtracking routine that controls the trade-off between bound improvement and inexactness. Importantly, this routine allows us to derive theoretical finite convergence guarantees for our i-C&CG method, demonstrating that it converges to the exact optimal solution under some parameter settings. Numerical experiments on a scheduling problem and a facility location problem demonstrate the computational advantages of our i-C&CG method over a state-of-the-art C&CG method.
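
For readers unfamiliar with the exact C&CG scheme that i-C&CG refines, the sketch below runs the basic loop on a made-up capacity-planning toy with a finite scenario set (cvxpy and numpy assumed); the i-C&CG ingredients from the talk, solving the master only to a prescribed gap or time limit plus the backtracking routine, are only indicated in a comment and not implemented.

```python
# Basic (exact) column-and-constraint generation on a toy two-stage robust
# capacity problem with finitely many demand scenarios.  The i-C&CG method
# from the talk would solve the master only to a relative gap / time limit
# and control inexactness with a backtracking routine; here the master is
# solved exactly.  All data are made up for illustration.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n_scen = 4, 30
build_cost = rng.uniform(5, 10, m)           # first-stage capacity cost
ship_cost = rng.uniform(1, 3, m)             # second-stage serving cost
penalty = 20.0                               # unmet demand (complete recourse)
scenarios = rng.uniform(0, 10, size=(n_scen, m))

def recourse_cost(cap, d):
    y = cp.Variable(m, nonneg=True)
    u = cp.Variable(m, nonneg=True)          # unmet demand
    prob = cp.Problem(cp.Minimize(ship_cost @ y + penalty * cp.sum(u)),
                      [y <= cap, y + u >= d])
    prob.solve()
    return prob.value

pool = [scenarios[0]]                        # scenarios added to the master so far
for it in range(40):
    x, eta = cp.Variable(m, nonneg=True), cp.Variable()
    cons = []
    for d in pool:                           # recourse copies per pooled scenario
        y, u = cp.Variable(m, nonneg=True), cp.Variable(m, nonneg=True)
        cons += [y <= x, y + u >= d, eta >= ship_cost @ y + penalty * cp.sum(u)]
    master = cp.Problem(cp.Minimize(build_cost @ x + eta), cons)
    master.solve()                           # i-C&CG: stop at a prescribed gap instead
    lb = master.value
    worst_costs = [recourse_cost(x.value, d) for d in scenarios]
    worst = int(np.argmax(worst_costs))
    ub = build_cost @ x.value + worst_costs[worst]
    print(f"iter {it}: LB = {lb:.2f}, UB = {ub:.2f}")
    if ub - lb <= 1e-4 * abs(ub):
        break
    pool.append(scenarios[worst])            # add the violating scenario
```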

Bio:

Dr. Karmel S. Shehadeh is an Assistant Professor in the Department of Industrial and Systems Engineering at Lehigh University. Before joining Lehigh, she was a presidential and dean postdoctoral fellow at Heinz College of Information Systems and Public Policy at Carnegie Mellon University. She holds a doctoral degree in Industrial and Operations Engineering from the University of Michigan, a master's degree in Systems Science and Industrial Engineering from Binghamton University, and a bachelor's in Biomedical Engineering from Jordan University of Science and Technology. Her research interests revolve around stochastic and robust optimization with applications to healthcare operations and analytics, facility location, transportation systems, and fair decision-making. Professional recognition of her work includes an INFORMS Minority Issues Forum Paper Award (winner), Junior Faculty Interest Group Paper Prize (finalist), Service Science Best Cluster Paper Award (finalist), and Lehigh's Alfred Noble Robinson Faculty Award.

March 8, 2024

15:00 (CET)


McGill University 


Robust Data-driven Prescriptiveness Optimization


HEC Montréal


End-to-end Conditional Robust Optimization

Abstract:

The abundance of data has led to the emergence of a variety of optimization techniques that attempt to leverage available side information to provide more anticipative decisions. The wide range of methods and contexts of application have motivated the design of a universal unitless measure of performance known as the coefficient of prescriptiveness. This coefficient was designed to quantify both the quality of contextual decisions compared to a reference one and the prescriptive power of side information. To identify policies that maximize the former in a data-driven context, this talk introduces a distributionally robust contextual optimization model where the coefficient of prescriptiveness substitutes for the classical empirical risk minimization objective. We present a bisection algorithm to solve this model, which relies on solving a series of linear programs when the distributional ambiguity set has an appropriate nested form and polyhedral structure. Studying a contextual shortest path problem, we evaluate the robustness of the resulting policies against alternative methods when the out-of-sample dataset is subject to varying amounts of distribution shift.

Abstract:

The field of Contextual Optimization (CO) integrates machine learning and optimization to solve decision-making problems under uncertainty. Recently, a risk-sensitive variant of CO, known as Conditional Robust Optimization (CRO), has emerged that combines uncertainty quantification with robust optimization to promote safety and reliability in high-stakes applications. Exploiting modern differentiable optimization methods, we propose a novel end-to-end approach to train a CRO model in a way that accounts for both the empirical risk of the prescribed decisions and the quality of conditional coverage of the contextual uncertainty set that supports them. While guarantees of success for the latter objective are impossible to obtain from the point of view of conformal prediction theory, high-quality conditional coverage is achieved empirically by ingeniously employing a logistic regression differentiable layer within the calculation of coverage quality in our training loss. We show that the proposed training algorithms produce decisions that outperform the traditional estimate-then-optimize approaches.

Bio:

Mehran Poursoltani is a Postdoctoral Fellow at the Desautels Faculty of Management and the Bensadoun School of Retail Management at McGill University. He earned his PhD in Management Science from HEC Montreal, where he was supervised by Erick Delage and Angelos Georghiou. His research interests include robust optimization, regret minimization, contextual optimization, decision analysis, and risk management.

Bio:

Abhilash Chenreddy is a PhD candidate at the Department of Decision Sciences, HEC Montreal supervised by Prof. Erick Delage. His research interests are at the intersection of machine learning and Robust Optimization, with a focus on designing uncertainty sets adaptable to contextual information.

February 23, 2024

15:00 (CET)

Arie Koster

RWTH Aachen University


Solving Robust Combinatorial Optimization Problems under Budget Uncertainty faster by Recycling Valid Inequalities or Tree Search

Abstract:

In combinatorial optimization problems under budget uncertainty, the solution space is deterministic, but the objective is uncertain. Thanks to the seminal work of Bertsimas & Sim (2003, 2004), such problems can be solved in various ways: by separating constraints bounding the objective, by a compact reformulation, or by solving n+1 deterministic problems of the same type. Although the complexity of the problem does not increase in theory, in many cases the performance deteriorates significantly.

In this talk, we show two ways to enhance the computation times of the compact reformulation: (i) recycling valid inequalities known for the deterministic problem yielding surprisingly strong valid inequalities for the robust problem and (ii) a tailored branch and bound algorithm performing a tree search on one of the dual variables. This is joint work with Timo Gersing and Christina Büsing.
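
The "solving n+1 deterministic problems of the same type" route mentioned in the first paragraph is easy to demonstrate; the sketch below applies it to a toy robust selection problem with made-up data and checks the answer by brute force. It illustrates the classical Bertsimas & Sim result only, not the recycling or tree-search enhancements presented in the talk.

```python
# Bertsimas & Sim "n+1 deterministic problems" on a toy robust selection
# problem: pick p of n items; up to Gamma of the chosen items may take their
# worst-case cost c_i + d_i.  The nominal problem is solved by sorting, so
# the robust problem reduces to n+1 sorts.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p, Gamma = 12, 5, 3
c = rng.uniform(1, 10, n)        # nominal costs
d = rng.uniform(0, 5, n)         # maximal deviations

def nominal(costs):
    """Deterministic selection: sum of the p smallest costs."""
    return np.sort(costs)[:p].sum()

# Bertsimas & Sim: min over theta in {0, d_1, ..., d_n} of
#   Gamma * theta + min_x sum_i (c_i + max(d_i - theta, 0)) x_i
robust_bs = min(Gamma * th + nominal(c + np.maximum(d - th, 0.0))
                for th in np.append(d, 0.0))

# brute-force check for this small instance
def robust_cost(idx):
    dev = np.sort(d[list(idx)])[::-1][:Gamma].sum()   # Gamma largest deviations
    return c[list(idx)].sum() + dev
robust_bf = min(robust_cost(s) for s in itertools.combinations(range(n), p))

print(round(robust_bs, 4), round(robust_bf, 4))   # the two values coincide
```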

Bio:

Arie M.C.A. Koster has been a Professor of Mathematics at RWTH Aachen University since 2009. Before that, he received his PhD from Maastricht University and worked at the Zuse Institute Berlin and the University of Warwick. His research interests include integer linear programming, polyhedral combinatorics, robust optimization, algorithmic graph theory, and network optimization. He has published over 100 research papers and is on the editorial board of Annals of Operations Research, INFORMS Journal on Computing, and Networks.

February 9, 2024

15:00 (CET)

Eindhoven University of Technology


Title: Optimal robust inventory management with volume flexibility: Matching capacity and demand with the lookahead peak-shaving policy

Abstract:

We study inventory control with volume flexibility: A firm can replenish using period-dependent base capacity at regular sourcing costs and access additional supply at a premium. The optimal replenishment policy is characterized by two period-dependent base-stock levels but determining their values is not trivial, especially for nonstationary and correlated demand. We propose the Lookahead Peak-Shaving policy that anticipates and peak shaves orders from future peak-demand periods to the current period, thereby matching capacity and demand. Peak shaving anticipates future order peaks and partially shifts them forward. This contrasts with conventional smoothing, which recovers the inventory deficit resulting from demand peaks by increasing later orders. Our contribution is threefold. First, we use a novel iterative approach to prove the robust optimality of the Lookahead Peak-Shaving policy. Second, we provide explicit expressions of the period-dependent base-stock levels and analyze the amount of peak shaving. Finally, we demonstrate how our policy outperforms other heuristics in stochastic systems. Most cost savings occur when demand is nonstationary and negatively correlated, and base capacities fluctuate around the mean demand. Our insights apply to several practical settings, including production systems with overtime, sourcing from multiple capacitated suppliers, or transportation planning with a spot market. Applying our model to data from a manufacturer reduces inventory and sourcing costs by 6.7%, compared to the manufacturer's policy without peak shaving.

This is a joint work with Joren Gijsbrechts, Robert N. Boute, Jan A. Van Mieghem.

Bio:

Christina Imdahl is Assistant Professor of Machine Learning in Operations Management at the OPAC group of Eindhoven University of Technology (TU/e). Christina's research focuses on decision-making in operations management. She develops models that are motivated by operational practice and explores how we can leverage data to make better decisions. In her research, she combines the use of stochastic and robust optimization with empirical analyses to gain insights into performance.

December 15, 2023

15:00 (CET)

University of Illinois at Chicago  


Title: Improving the Security of United States Elections with Robust Optimization

Abstract:

For more than a century, election officials across the United States have inspected voting machines before elections using a procedure called Logic and Accuracy Testing (LAT). This procedure consists of election officials casting a test deck of ballots into each voting machine and confirming the machine produces the expected vote total for each candidate.

 

In this talk, I will bring a scientific perspective to LAT by introducing the first formal approach to designing test decks with rigorous security guarantees. Specifically, we propose using robust optimization to find test decks that are guaranteed to detect any voting machine misconfiguration that would cause votes to be swapped across candidates. Out of all the test decks with this security guarantee, the robust optimization problem yields the test deck with the minimum number of ballots, thereby minimizing implementation costs for election officials. To facilitate deployment at scale, we developed a practical exact algorithm for solving our robust optimization problems based on mixed-integer optimization and the cutting plane method.

 

In partnership with the Michigan Bureau of Elections, we retrospectively applied our robust optimization approach to all 6928 ballot styles from Michigan's November 2022 general election; this retrospective study reveals that the test decks with rigorous security guarantees obtained by our approach require, on average, only 1.2% more ballots than current practice. Our robust optimization approach has since been piloted in real-world elections by the Michigan Bureau of Elections as a low-cost way to improve election security and increase public trust in democratic institutions.

 

This is joint work with Alex Halderman (Michigan) and Braden Crimmins (Stanford and Michigan), and our preprint is available at https://arxiv.org/pdf/2308.02306.pdf.


Bio:

Brad Sturt is an assistant professor of business analytics at the University of Illinois Chicago. His current research focuses on the application of robust optimization to revenue management and public policy. He is also broadly interested in using robust optimization to design tractable approaches for dynamic optimization problems. His research has received several recognitions, including second place in the INFORMS Junior Faculty Interest Group (JFIG) Paper Competition and second place in the INFORMS George Nicholson Student Paper Competition.

December 1, 2023

15:00 (CET)

Esther Julien

TU Delft


Title: Machine Learning for K-adaptability in Two-stage Robust Optimization

Justin Dumouchelle

University of Toronto


Title: Neur2RO: Neural Two-Stage Robust Optimization

Abstract:

Two-stage robust optimization problems constitute one of the hardest optimization problem classes. One of the solution approaches to this class of problems is K-adaptability. This approach simultaneously seeks the best partitioning of the uncertainty set of scenarios into K subsets and optimizes decisions corresponding to each of these subsets. In the general case, it is solved using the K-adaptability branch-and-bound algorithm, which requires the exploration of exponentially growing solution trees. To accelerate finding high-quality solutions in such trees, we propose a machine learning-based node selection strategy. In particular, we construct a feature engineering scheme based on general two-stage robust optimization insights that allows us to train our machine learning tool on a database of resolved B&B trees and to apply it as-is to problems of different sizes and/or types. We experimentally show that our learned node selection strategy outperforms a vanilla random node selection strategy when tested on problems of the same type as the training problems, even when the K-value or the problem size differs from that of the training instances. This work is in collaboration with Krzysztof Postek (BCG) and Ilker Birbil (UvA).

Abstract:

Robust optimization provides a mathematical framework for modeling and solving decision-making problems under worst-case uncertainty. This work addresses two-stage robust optimization (2RO) problems (also called adjustable robust optimization), wherein first-stage and second-stage decisions are made before and after uncertainty is realized, respectively. This results in a nested min-max-min optimization problem which is extremely challenging computationally, especially when the decisions are discrete. We propose Neur2RO, an efficient machine learning-driven instantiation of column-and-constraint generation (CCG), a classical iterative algorithm for 2RO. Specifically, we learn to estimate the value function of the second-stage problem via a novel neural network architecture that is easy to optimize over by design. Embedding our neural network into CCG yields high-quality solutions quickly, as evidenced by experiments on two 2RO benchmarks, knapsack and capital budgeting. For knapsack, Neur2RO finds solutions close to the best-known values in a few seconds, compared to the three hours of the state-of-the-art exact branch-and-price algorithm; for larger and more complex instances, Neur2RO finds even better solutions. For capital budgeting, Neur2RO outperforms three variants of the K-adaptability algorithm, particularly on the largest instances, with a 5 to 10-fold reduction in solution time.

Our code and data are available at https://github.com/khalil-research/Neur2RO.

Bio:

Esther is a PhD student at TU Delft. Her interests lie in embedding machine learning techniques into algorithm design to obtain faster/better results, currently for projects in robust optimization and phylogenetics.

Bio:

Justin is a PhD candidate at the University of Toronto. His research interests are at the intersection of machine learning and operations research, with a current focus on developing learning-based algorithms for discrete optimization, stochastic programming, and robust optimization.

November 17, 2023

15:00 (CET)

Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM) 


Title: Combinatorial Robust Optimization with Decision-Dependent Information Discovery and Polyhedral Uncertainty

Abstract: 

Given a nominal combinatorial optimization problem, we consider a robust two-stage variant with polyhedral cost uncertainty, called DDID. In the first stage, DDID selects a subset of uncertain cost coefficients to be observed, and in the second stage, DDID selects a solution to the nominal problem, where the remaining cost coefficients are still uncertain. Given a compact linear programming formulation for the nominal problem, we provide a mixed-integer linear programming (MILP) formulation for DDID. The MILP is compact if the number of constraints describing the uncertainty polytope other than lower and upper bounds is constant. In that case, the formulation leads to polynomial-time algorithms for DDID when the number of possible observations is polynomially bounded. We extend this formulation to more general nominal problems through column generation and constraint generation algorithms. We illustrate our reformulations and algorithms numerically on the selection problem, the orienteering problem, and the spanning tree problem.


Link to paper: https://hal.science/hal-04097679/ 

Bio:

Michael Poss is a senior research fellow at the CNRS, in the LIRMM laboratory. His current research focuses mainly on robust combinatorial optimization. He is the founding managing editor of the Open Journal of Mathematical Optimization (https://ojmo.centre-mersenne.org/).

November 3, 2023

15:00 (CET)

University of Amsterdam


Title: An exact method for a class of robust nonlinear optimization problems. 


Trier University


Title: Using Column Generation in Column-and-Constraint Generation for Adjustable Robust Optimization 


Abstract:

We propose a novel exact approach to solve optimization problems with robust nonlinear constraints that are neither convex nor concave but sums of products of linear times concave functions with respect to the uncertain parameters. The proposed approach combines a cutting set method with reformulation-perspectification techniques and branch and bound. We show that our approach also applies to robust convex optimization, which is a sub-class of problems involving sum of linear times concave functions in the uncertain parameters, thereby extending the existing literature. Numerical experiments on a robust convex geometric optimization problem, a robust quadratic optimization problem with data uncertainty and implementation error, and a lot sizing problem in which we impose a quadratic decision rule, demonstrate the effectiveness of the proposed approach. 
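
As background for the first abstract, the snippet below shows the outer cutting-set loop on the simplest possible case, a constraint that is linear in the uncertainty with a unit-ball uncertainty set, so that the pessimization step has a closed form; in the talk's problem class the pessimization itself is nonconvex and is handled by the reformulation-perspectification and branch-and-bound machinery, which is not reproduced here. All data are made up, and cvxpy and numpy are assumed.

```python
# Generic cutting-set loop for a robust constraint, shown on a toy case where
# the constraint is linear in the uncertainty u and U is the unit L2 ball, so
# the pessimization step max_{||u||<=1} (a0 + P u)'x = a0'x + ||P'x||_2 is
# available in closed form.  (The talk's problem class needs RPT + branch and
# bound inside this step.)
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, k = 5, 4
profit = rng.uniform(1, 2, n)
a0, P, b = rng.uniform(0.5, 1.5, n), 0.3 * rng.normal(size=(n, k)), 2.0

U_hat = [np.zeros(k)]                         # finite scenario set, starts nominal
x = cp.Variable(n, nonneg=True)
for rounds in range(1, 100):
    cons = [(a0 + P @ u) @ x <= b for u in U_hat] + [x <= 1]
    cp.Problem(cp.Maximize(profit @ x), cons).solve()
    g = P.T @ x.value
    worst = a0 @ x.value + np.linalg.norm(g)  # pessimization (closed form here)
    if worst <= b + 1e-5:
        break
    U_hat.append(g / max(np.linalg.norm(g), 1e-12))

print("cutting-set objective:", round(float(profit @ x.value), 4), "after", rounds, "rounds")

# sanity check against the exact robust counterpart of this toy constraint
xe = cp.Variable(n, nonneg=True)
cp.Problem(cp.Maximize(profit @ xe),
           [a0 @ xe + cp.norm(P.T @ xe, 2) <= b, xe <= 1]).solve()
print("exact reformulation:  ", round(float(profit @ xe.value), 4))
```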

Abstract:

Column-and-constraint generation (CCG) algorithms are among the most popular approaches for solving adjustable robust problems. In recent years, they have been successfully applied to a variety of problems, most of which have continuous wait-and-see decisions. While it is typically acknowledged that CCG can scale relatively well for problems with continuous wait-and-see decisions, the same cannot be claimed for problems in which (mixed-)integer decisions must be taken after the uncertainty is revealed. In this talk, we computationally evaluate the performance of CCG algorithms when applied to problems with mixed-integer second-stage decisions and discuss their main weaknesses. Then, we show how column-generation techniques can be employed to alleviate these burdens and successfully reduce the computational time needed to prove optimality. Additionally, we introduce a new heuristic method, based on the column-generation procedure, to quickly find feasible solutions at each step of the CCG. The scheme can also be enhanced with domain-specific knowledge on the deterministic version of the problem being solved and can exploit parallelism. Finally, we highlight that the method can, at each iteration, be warm-started by knowledge gained at previous iterations of the CCG. This is in contrast to standard CCG algorithms, where each iteration solves a new problem from scratch. The computational benefits of the proposed scheme are then evaluated on two optimization problems arising in logistics and scheduling.

Bio:

Danique is a PhD candidate at the University of Amsterdam. Her research focuses on finding new methods to globally solve hard minimization problems with concave parts in the objective and/or constraint functions. These methods can also be used to solve robust nonlinear optimization problems exactly.

Bio:

Henri is currently a postdoctoral researcher at Universität Trier (Germany). He received a Master's degree in "Optimization and learning of systems" from the Université de Technologie de Compiègne (France) in 2019 as well as an Engineering degree in "Production systems" from the Università degli studi di Genova (Italy) in the same year. He earned his Ph.D. at the Università di Bologna (Italy) under the supervision of Michele Monaci and Enrico Malaguti, where he studied adjustable robust optimization problems with nonlinear recourse. His main research interests are in optimization under uncertainty and decomposition methods for large-scale applications. In particular, his work focuses on exact methods for two-stage robust problems with mixed-integer convex wait-and-see decisions using decomposition approaches.

October 20, 2023

15:00 (CET)

Erasmus University Rotterdam 


Title: Two-Stage Robust Quadratic Optimization with Equalities and its Application to Optimal Power Flow 

Abstract: 

In this work, we consider two-stage quadratic optimization problems under ellipsoidal uncertainty. In the first stage, one needs to decide upon the values of a subset of the optimization variables (control variables). In the second stage, the uncertainty is revealed, and the rest of the optimization variables (state variables) are determined as a solution to a known system of possibly non-linear equations. This type of problem occurs, for instance, in optimization for dynamical systems, such as electric power systems as well as gas and water networks. We propose a convergent iterative algorithm to build a sequence of approximately robustly feasible solutions. At each iteration, the algorithm optimizes over a subset of the feasible set and uses affine approximations of the second-stage equations while preserving the non-linearity of other constraints. We implement our approach for AC Optimal Power Flow and demonstrate the performance of our proposed method on Matpower instances. This work focuses on quadratic problems, but the approach is suitable for more general setups.

 

This topic is based on the following working paper: https://arxiv.org/abs/2104.03107. This is joint work with Bissan Ghaddar from Ivey Business School and Daniel Molzahn from Georgia Tech.




Bio:

Olga is an assistant professor of optimization at the Econometric Institute of Erasmus University Rotterdam. She develops solution approaches and approximation algorithms for non-linear optimization problems using convex programming. Typical applications for her research are network-based problems, such as optimization of energy, water, and transport systems.  

October 6, 2023

15:00 (CET)

National University of Singapore 


Title: The Occam's Razor of Robustness 

Abstract: 

Robust optimization has become an indispensable tool in tackling uncertainties in optimization. The budgeted uncertainty set introduced by Bertsimas and Sim (2004) is a prevalent approach in this field due to its effectiveness in addressing a host of practical linear optimization problems under uncertainty, though challenges remain in multi-period decision models. In this work, we revisit this approach with a distinct angle: bypassing traditional support boundaries and introducing a robust satisficing model where preferences are conveyed through targets. At the core of our discussion is the cardinal fragility criterion, which evaluates a solution’s vulnerability to input deviations in reaching a target. Our analysis employs a prediction disparity metric rooted in the L1-norm, echoing Bertsimas and Sim (2004), but diverges with its boundary-free support. This fragility construct seamlessly integrates into a multi-attribute robust satisficing model, prioritizing alignment with predetermined attribute targets. By eliminating support constraints, we extend the boundaries of our computational reach. Our approach proficiently handles multiperiod models by employing direct and straightforward formulations, without introducing dual variables or resorting to approximations. Additionally, we present a method to optimize the parameters of the prediction disparity metric using prediction residuals, ensuring the desired confidence profile across various target attainment levels. We conducted two studies: one focusing on portfolio optimization, and the other using simulation to explore if heightened crashing costs can enhance the likelihood of on-time project completion.
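
To fix ideas, here is a minimal, heavily simplified sketch of the target-based robust satisficing recipe with an L1 prediction disparity metric and no support bounds, for a single linear attribute only; the talk's cardinal fragility criterion and multi-attribute model are richer than this. The portfolio data and target are invented for the example, and cvxpy and numpy are assumed.

```python
# Minimal robust satisficing sketch (single attribute, L1 disparity metric,
# unbounded support); not the multi-attribute model from the talk.
# We look for the smallest "fragility" k such that the loss -z'x stays within
# a target tau for every return vector z in R^n, penalized by how far z drifts
# from the prediction z_hat:
#     -z'x <= tau + k * ||z - z_hat||_1   for all z.
# Taking the supremum over z shows this is equivalent to
#     -z_hat'x <= tau   and   ||x||_inf <= k,
# so no support bounds and no dual variables are needed.
import numpy as np
import cvxpy as cp

z_hat = np.array([0.060, 0.050, 0.047, 0.030, 0.010, -0.010])  # predicted returns
tau = -0.8 * z_hat.max()          # target: keep >= 80% of the best predicted return

x = cp.Variable(len(z_hat), nonneg=True)
k = cp.Variable(nonneg=True)
prob = cp.Problem(cp.Minimize(k),
                  [cp.sum(x) == 1,
                   -z_hat @ x <= tau,            # meet the target at the prediction
                   cp.norm(x, "inf") <= k])      # fragility w.r.t. the L1 metric
prob.solve()
print("fragility k* =", round(float(k.value), 3))
print("portfolio    =", np.round(x.value, 3))
# smaller k means the target degrades more slowly as realized returns drift
# (in L1 distance) away from the prediction
```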



Bio:

Dr. Melvyn Sim is Professor and Provost's Chair at the Department of Analytics & Operations, NUS Business school. His research interests fall broadly under the categories of decision making and optimization under uncertainty with applications ranging from finance, supply chain management, healthcare to engineered systems. He is one of the active proponents of Robust Optimization and Satisficing, and has given keynote seminars at international conferences.  Dr. Sim is currently serving as a Department Editor of MSOM, and as an associate editor for Operations Research, Management Science and INFORMS Journal on Optimization.

September 22, 2023

15:00 (CET)

Trier University


Title: The Connections Between Bilevel and Robust Optimization

Abstract: 

Robust optimization is a prominent approach in optimization to deal with uncertainties in the data of the optimization problem by hedging against the worst-case realization of the uncertain event. Doing this usually leads to a multilevel structure of the mathematical formulation that is very similar to what we are used to considering in bilevel optimization. Hence, these two fields are closely related, but the study of their combination is still in its infancy.

In this talk, we analyze the connections between different types of robust problems (strictly robust problems with and without decision-dependence of their uncertainty sets, min-max-regret problems, and two-stage robust problems) as well as of bilevel problems (optimistic problems, pessimistic problems, and robust bilevel problems).

We hope that these results pave the way for a stronger connection between the two fields, in particular to use both theory and algorithms from one field in the other and vice versa.


Bio:

Short CV

Since 1/2019: Full professor (W3) for “Nonlinear Optimization” at Trier University

4/2014-9/2018: Junior professor (W1) for “Optimization of Energy Systems” at the Friedrich-Alexander-Universität Erlangen-Nürnberg

1/2013: PhD in mathematics at the Leibniz Universität Hannover

10/2008: Diplom in mathematics (minor computer science) at the Leibniz Universität Hannover

Editorial Work

 Since June 2022: Editorial Board member of the Journal of Optimization Theory and Applications

Since May 2022: Editorial Board member of Optimization Letters

Since 2021: Associate Editor of OR Spectrum

 Since 2021: Associate Editor of the EURO Journal on Computational Optimization

Since 2011: Technical Editor of Mathematical Programming Computation 

Awards

Optimization Letters best paper award for the year 2021 

Marguerite Frank Award for the best EJCO paper in 2021

MMOR Best Paper Award 2021

1st prize (category: scientific) of the ROADEF/EURO Challenge 2020

Howard Rosenbrock Prize 2020

EURO Excellence in Practice Award 2016

September 8, 2023

15:00 (CET)

Dick den Hertog and Thodoris Koukouvinos 

University of Amsterdam and Massachusetts Institute of Technology


Title: A novel algorithm for a broad class of nonconvex optimization problems - and how this can be used in Robust Optimization

Abstract: 

We propose a new global optimization approach for solving nonconvex optimization problems in which the nonconvex components are sums of products of convex functions. A broad class of nonconvex problems can be written in this way, such as concave minimization problems, difference of convex problems, and fractional optimization problems. Our approach exploits two techniques: first, we introduce a new technique, called the Reformulation-Perspectification Technique (RPT), to obtain a convex approximation of the considered nonconvex continuous optimization problem. Next, we develop a spatial Branch and Bound scheme, leveraging RPT, to obtain a global optimal solution. Numerical experiments on four different convex maximization problems, a quadratic constrained quadratic optimization problem, and a dike height optimization problem demonstrate the effectiveness of the proposed approach. In particular, our approach solves more instances to global optimality for the considered problems than BARON and SCIP. Moreover, for problem instances of larger dimension our approach outperforms both BARON and SCIP on computation time for most problem instances, while for smaller dimension BARON overall performs better on computation time.  Finding the worst-case scenario in Robust Optimization is often a nonconvex optimization problem. We therefore discuss how this novel algorithm can be used to solve robust and adaptive nonlinear optimization problems.

Bio:

Dick den Hertog is a professor of Operations Research at the University of Amsterdam. His research interests cover various fields in prescriptive analytics, in particular linear and nonlinear optimization. In recent years his main focus has been on robust optimization, and recently he started research on Optimization with Machine Learning. He is also active in applying the theory in real-life applications. In particular, he is interested in applications that contribute to a better society. He received the INFORMS Franz Edelman Award twice: in 2013 for his research on optimal flood protection, and in 2021 for his research on optimizing the food supply chain for the UN World Food Programme. Currently, he is doing research to develop better optimization models and techniques for cancer treatment together with researchers from Harvard Medical School and Massachusetts General Hospital (Boston, USA), and he is involved in research to optimize the locations of healthcare facilities, e.g., in Timor-Leste and Vietnam, together with the World Bank. He has been Visiting Professor at MIT for several years now. He is  Science-to-Impact Director of the Analytics for a Better World Institute, which he co-founded in 2022.

Bio:

Thodoris Koukouvinos is a fourth-year PhD student at MIT, Operations Research Center, advised by Prof. Dimitris Bertsimas. Before joining MIT, he studied Applied Mathematics and Physics at the National Technical University of Athens, in Greece. His research interests include ethics in machine learning, robust optimization and nonconvex optimization.

June 30, 2023

15:00 (CET)

University of Bremen


Title: Recoverable Robust Optimization with Commitment


University of Graz


Title: On the Complexity of Robust Multi-Stage Problems in the Polynomial Hierarchy 


Abstract:

We propose a model for recoverable robust optimization with commitment. Given a combinatorial optimization problem and uncertainty about elements that may fail, we ask for a robust solution that, after the failing elements are revealed, can be augmented in a limited way. However, we commit to preserve the remaining elements of the initial solution. We consider different polynomial-time solvable combinatorial optimization problems and settle the computational complexity of their robust counterparts with commitment. We show for the weighted matroid basis problem that an optimal solution to the nominal problem is also optimal for its robust counterpart. Indeed, matroids are provably the only structures with this strong property. Robust counterparts of other problems, such as the matching and the stable set problem, are NP-hard, even in bipartite graphs. However, we establish polynomial-time algorithms for the robust counterparts of the unweighted stable set problem in bipartite graphs and the weighted stable set problem in interval graphs, also known as the interval scheduling problem.
Joint work with Nicole Megow, Komal Muluk, and Britta Peis.

Abstract:

We study the computational complexity of multi-stage robust optimization problems. Such problems are formulated with alternating min/max quantifiers and therefore naturally fall into a higher stage of the polynomial hierarchy. Despite this, almost no hardness results with respect to the polynomial hierarchy are known.
In this work, we examine the hardness of robust two-stage adjustable and robust recoverable optimization with budgeted uncertainty sets. Our main technical contribution is the introduction of a technique tailored to prove Σ₃ᵖ-hardness of such problems. We highlight a difference between continuous and discrete budgeted uncertainty: In the discrete case, indeed a wide range of problems becomes complete for the third stage of the polynomial hierarchy; in particular, this applies to the TSP, independent set, and vertex cover problems. However, in the continuous case this does not happen and problems remain in the first stage of the hierarchy. Finally, if we allow the uncertainty to not only affect the objective, but also multiple constraints, then this distinction disappears and even in the continuous case we encounter hardness for the third stage of the hierarchy. This shows that even robust problems which are already NP-complete can still exhibit a significant computational difference between column-wise and row-wise uncertainty.

 

Joint work with Marc Goerigk and Lasse Wulf

Bio:

Felix Hommelsheim is a postdoc at the group of Prof. Dr. Nicole Megow at the University of Bremen. He received his PhD at TU Dortmund University, where he was supervised by Prof. Dr. Christoph Buchheim. Felix' research focuses on the design of efficient (approximation) algorithms for combinatorial optimization problems under uncertainty. In particular he is interested in algorithms for survivable network design problems.

Bio:

Stefan Lendl is COO and co-founder of s2 data & algorithms and postdoctoral researcher at University of Graz. He studied mathematics at Graz University of Technology and obtained his PhD under the supervision of Bettina Klinz in 2019. His research interests are both in applications and the theory of robust optimization. His theoretical interest is in obtaining a deeper understanding of the computational complexity of robust optimization problems, using techniques like the polynomial hierarchy and parameterized complexity. His start-up s2 data & algorithms develops MasterScheduler, a software solution applying robust optimization to holistic supply chain optimization.

June 16, 2023

15:00 (CET)

Cornell University


Title: Robust Facility Location Problems 

Abstract: 

The facility location problem is a well-known and extensively studied problem in the field of Operations Research, attracting significant attention from researchers and practitioners. In this talk, we specifically address robust facility location problems, which provide a valuable approach to tackle the complexities arising from uncertain demand. We consider a two-stage robust facility location problem on a metric under uncertain demand. The decision-maker needs to decide on the (integral) units of supply for each facility in the first stage to satisfy an uncertain second-stage demand, such that the sum of the first-stage supply cost and the worst-case cost of satisfying the second-stage demand over all scenarios is minimized. The second-stage decisions are only assignment decisions without the possibility of adding recourse supply capacity. This makes our model different from existing work on two-stage robust facility location and set covering problems. We consider an implicit model of uncertainty with an exponential number of demand scenarios specified by an upper bound k on the number of second-stage clients. In an optimal solution, the second-stage assignment decisions depend on the scenario; surprisingly, we show that restricting to a fixed (static) fractional assignment for each potential client irrespective of the scenario gives us an O(log k/log log k)-approximation for the problem. Moreover, the best such static assignment can be computed efficiently, giving us the desired guarantee. Our proposed static assignment policies showcase their effectiveness in achieving strong approximate solutions both theoretically and numerically. Moreover, these policies offer a promising approach to address uncertain demand in diverse facility location scenarios. This is joint work with Vineet Goyal and David Shmoys.


Bio:

Omar El Housni is an assistant professor at Cornell Tech and in the School of Operations Research and Information Engineering at Cornell University. He is also a Field Member of the Center of Applied Mathematics at Cornell. Omar received a PhD in Operations Research from Columbia University and a BS and MS in Applied Mathematics from École Polytechnique in Paris. Omar's research focuses on the design of robust and efficient algorithms for sequential dynamic optimization problems under uncertainty with applications in revenue management, assortment optimization, facility location problems and online matching markets.


May 26, 2023

15:00 (CET)

The University of Texas at Austin 


Title: Contextual Reinforcement Learning when we don't know the contexts 

Abstract: 

Contextual Bandits and more generally, contextual reinforcement learning, studies the problem where the learner relies on revealed contexts, or labels, to adapt learning and optimization strategies. What can we do when those contexts are missing? 

 

Statistical learning with missing or hidden information is ubiquitous in many theoretical and applied problems. A basic yet fundamental setting is that of mixtures, where each data point is generated by one of several possible (unknown) processes. In this talk, we are interested in the dynamic decision-making version of this problem. At the beginning of each (finite length, typically short) episode, we interact with an MDP drawn from a set of M possible MDPs. The identity of the MDP for each episode -- the context -- is unknown.

 

We review the basic setting of MDPs and Reinforcement Learning, and explain in that framework why this class of problems is both important and challenging. Then, we outline several of our recent results in this area, as time permits. We first show that without additional assumptions, the problem is statistically hard in the number of different Markov chains: finding an epsilon-optimal policy requires exponentially (in M) many episodes. We then study several special and natural classes of LMDPs. We show how ideas from the method-of-moments, in addition to the principle of optimism, can be applied here to derive new, sample efficient RL algorithms in the presence of latent contexts.

Bio:

Constantine Caramanis is a Professor in the ECE department of The University of Texas at Austin. He received a PhD in EECS from The Massachusetts Institute of Technology, in the Laboratory for Information and Decision Systems (LIDS), and an AB in Mathematics from Harvard University. His current research interests focus on decision-making in large-scale complex systems, with a focus on statistical learning and optimization.

May 19, 2023

15:00 (CET)

Friedrich-Alexander-Universität Erlangen-Nürnberg 


Title: (Data-driven) distributional robustness over time: How to 'learn' relevant uncertainties together with robust decisions?


Abstract: 

In many applications, determining optimized solutions that are hedged against uncertainties is mandatory. Classical stochastic optimization approaches, however, may be prone to the disadvantage that the underlying probability distributions are unknown or uncertain themselves. On the other hand, standard robust optimization may lead to conservative results as it generates solutions that are feasible regardless of how uncertainties manifest themselves within predefined uncertainty sets. Distributionally robust optimization (DRO) lies at the interface of robust and stochastic optimization as it robustly protects against uncertain distributions. DRO currently receives increased attention and is considered an alternative that can lead to less conservative but still uncertainty-protected solutions. In this talk, we will review some approaches in the area of distributional robustness. We will explain some recent developments that use scenario observations to learn more about the uncertain distributions over time, together with best possible protected solutions. The goal is to incorporate new information when it arrives and to improve our solution without resolving the entire robust counterpart. We also present computational results showing that our approach based on online learning leads to a practically efficient solution algorithm for DRO over time. This is joint work with K. Aigner, A. Bärmann, K. Braun, F. Liers, S. Pokutta, O. Schneider, K. Sharma, and S. Tschuppik (INFORMS Journal on Optimization (2023)).

Bio:

Frauke studied mathematics at the University of Cologne and obtained her doctoral degree in computer science in 2004. After research stays in Rome and in Paris, she headed a DFG-funded Emmy Noether Junior Research Group in Cologne. In 2010, she obtained her Habilitation and became a professor of Applied Mathematics in the Department of Mathematics at the Friedrich-Alexander-Universität Erlangen-Nürnberg in 2012. Her research is in optimization under uncertainty, including combinatorial, mixed-integer and nonlinear aspects. Integrating knowledge from data analysis is of high interest. She has coordinated the European Graduate College MINOA (Marie-Curie ITN, 2018-2021) and is Principal Investigator in the Collaborative Research Centers CRC 1411 and CRC/TRR 154. Since 04/2023, she has been spokesperson of the CRC/TRR 154.

May 5, 2023

15:00 (CET)

University of Passau


Title: Scenario Reduction for Robust Optimization with Discrete Uncertainty 

Abstract: 

Robust optimization typically follows a worst-case perspective, where a single scenario may determine the objective value of a given solution. Accordingly, it is a challenging task to reduce the size of an uncertainty set without changing the resulting objective value too much. On the other hand, robust optimization problems with many scenarios tend to be hard to solve, in particular for two-stage problems. Hence, a reduced uncertainty set may be central to finding solutions in reasonable time. We propose scenario reduction methods that give guarantees on the performance of the resulting robust solution. Scenario reduction problems for one- and two-stage robust optimization are framed as optimization problems that only depend on the uncertainty set and not on the underlying decision-making problem. Experimental results indicate that objective values for the reduced uncertainty sets are closely correlated to the original objective values, resulting in better solutions than when using general-purpose clustering methods such as K-means.

 

Authors: Marc Goerigk and Mohammad Khosravi
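
For context, the small script below runs the whole pipeline with the general-purpose K-means baseline that the abstract compares against (the authors' reduction methods with performance guarantees are not reproduced): reduce a discrete set of cost scenarios, solve the one-stage robust problem on the reduced set, and evaluate the resulting solution on the full set. Problem data are invented; numpy, scikit-learn and cvxpy are assumed.

```python
# Scenario reduction pipeline with the K-means baseline mentioned above
# (the talk's methods come with guarantees; this is only the baseline).
# Robust problem: min_x max_s c_s'x over the simplex, with 200 cost scenarios.
import numpy as np
import cvxpy as cp
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_scen, n = 200, 10
C = rng.uniform(1, 10, size=(n_scen, n))        # cost vectors = scenarios

def robust_solution(cost_rows):
    x, t = cp.Variable(n, nonneg=True), cp.Variable()
    cons = [cp.sum(x) == 1] + [t >= row @ x for row in cost_rows]
    cp.Problem(cp.Minimize(t), cons).solve()
    return x.value

x_full = robust_solution(C)                      # solved on the full set
for K in (5, 20, 50):
    centers = KMeans(n_clusters=K, n_init=10, random_state=0).fit(C).cluster_centers_
    x_red = robust_solution(centers)             # solved on the reduced set
    # evaluate both solutions against the ORIGINAL uncertainty set
    print(f"K = {K:2d}: reduced-set solution costs {np.max(C @ x_red):.3f}, "
          f"full-set solution costs {np.max(C @ x_full):.3f}")
```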

Bio:

Marc Goerigk is Professor for Business Decisions and Data Science at the University of Passau. His research focuses mainly on robust decision making for  combinatorial problems, in particular regarding aspects of complexity and making the best use of data. Marc holds a PhD in mathematics from the University of Göttingen. Before joining his colleagues in Passau, he was Professor at the University of Siegen, lecturer at Lancaster University and post-doctoral researcher at the University of Kaiserslautern.

April 21, 2023

17:00 (CET)

University of Southern California


Title: Learning Optimal Classification Trees Robust to Distribution Shifts


Abstract: 

We consider the problem of learning classification trees that are robust to distribution shifts between training and testing/deployment data. This problem arises frequently in high-stakes settings such as public health and social work, where data is often collected using self-reported surveys which are highly sensitive to, e.g., the framing of the questions, the time when and place where the survey is conducted, and the level of comfort the interviewee has in sharing information with the interviewer. We propose a method for learning optimal robust classification trees based on mixed-integer robust optimization technology. In particular, we demonstrate that the problem of learning an optimal robust tree can be cast as a single-stage mixed-integer robust optimization problem with a highly nonlinear and discontinuous objective. We reformulate this problem equivalently as a two-stage linear robust optimization problem for which we devise a tailored solution procedure based on constraint generation. We evaluate the performance of our approach on numerous publicly available datasets, and compare the performance to a regularized, non-robust optimal tree. We show an increase of up to 14.16% in worst-case accuracy and of up to 4.72% in average-case accuracy across several datasets and distribution shifts from using our robust solution in comparison to the non-robust one.

Bio:

Phebe Vayanos is an Associate Professor of Industrial & Systems Engineering and Computer Science at the University of Southern California. She is also an Associate Director of CAIS, the Center for Artificial Intelligence in Society at USC. Her research is focused on Operations Research and Artificial Intelligence and in particular on optimization and machine learning. Her work is motivated by problems that are important for social good, such as those arising in public housing allocation, public health, and biodiversity conservation. Prior to joining USC, she was lecturer in the Operations Research and Statistics Group at the MIT Sloan School of Management, and a postdoctoral research associate in the Operations Research Center at MIT. She holds a PhD degree in Operations Research and an MEng degree in Electrical & Electronic Engineering, both from Imperial College London. She is a recipient of the NSF CAREER award and the INFORMS Diversity, Equity, and Inclusion Ambassador Program Award. 

March 24, 2023

15:00 (CET)

University of Amsterdam

Title: Finding regions of counterfactual explanations via robust optimization

Imperial College London

Title: Differential Privacy via Distributionally Robust Optimization 

Abstract:

Counterfactual explanations (CEs) are critical for detecting bias and improving the explainability of data-driven classification models. A counterfactual explanation is a minimally perturbed data point for which the decision of the model changes. However, most of the existing methods can only provide one CE, which may not be feasible for the user. In this talk, we present an iterative method that calculates robust CEs, that is, CEs that remain valid even after the features are slightly perturbed. The method provides a whole region of CEs, allowing the user to choose a suitable recourse to obtain a desired outcome. We discuss how to use algorithmic ideas from robust optimization and prove convergence results for the most common machine learning methods, including logistic regression, decision trees, random forests, and neural networks.
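
For the linear-classifier special case, the robust validity condition in the first abstract has a closed form; the sketch below illustrates it on toy data (the talk's iterative method additionally covers decision trees, random forests and neural networks and returns a whole region of CEs). It assumes numpy, scikit-learn and cvxpy.

```python
# Robust counterfactual explanation for a linear classifier (toy sketch).
# We ask for the closest point xc that is classified positively even if its
# features are perturbed within an L2 ball of radius rho:
#     w'(xc + delta) + b >= 0 for all ||delta||_2 <= rho
#     <=>  w'xc + b >= rho * ||w||_2.
import numpy as np
import cvxpy as cp
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy labels
clf = LogisticRegression().fit(X, y)
w, b = clf.coef_.ravel(), float(clf.intercept_[0])

x0 = np.array([-1.5, -0.5])                      # factual point, classified 0
rho = 0.3                                        # perturbations to guard against

xc = cp.Variable(2)
cp.Problem(cp.Minimize(cp.norm(xc - x0, 2)),
           [w @ xc + b >= rho * np.linalg.norm(w)]).solve()
print("robust counterfactual:", np.round(xc.value, 3))
print("decision value at xc :", round(float(w @ xc.value + b), 3))
```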

Bio:

Donato Maragno is a PhD candidate in the Department of Business Analytics at the University of Amsterdam in the Netherlands. His research centers on the integration of machine learning and optimization techniques. He has contributed to the development of OptiCL, an open-source tool that enables optimization with constraint learning.

Imperial College London

Title: Differential Privacy via Distributionally Robust Optimization

Abstract:

In recent years, differential privacy has emerged as the de facto standard for sharing statistics of datasets while limiting the disclosure of private information about the involved individuals. This is achieved by randomly perturbing the statistics to be published, which in turn leads to a privacy-accuracy trade-off: larger perturbations provide stronger privacy guarantees, but they result in less accurate statistics that offer lower utility to the recipients. Of particular interest are therefore optimal mechanisms that provide the highest accuracy for a pre-selected level of privacy. To date, work in this area has focused on specifying families of perturbations a priori and proving their asymptotic and/or best-in-class optimality.

In this paper, we develop a class of mechanisms that enjoy non-asymptotic and unconditional optimality guarantees. To this end, we formulate the mechanism design problem as an infinite-dimensional distributionally robust optimization problem. We show that this problem affords a strong dual, and we develop converging hierarchies of finite-dimensional upper and lower bounding problems. Our upper bounds correspond to implementable perturbations whose suboptimality can be bounded by our lower bounds. Both bounding problems can be solved within seconds via cutting plane techniques that exploit the inherent problem structure. Our numerical experiments demonstrate that our perturbations can outperform the previously best results from the literature on artificial as well as benchmark problems.

Co-authors: Huikang Liu & Wolfram Wiesemann
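As background for the privacy-accuracy trade-off described above, the snippet below shows the textbook Laplace mechanism, i.e., the kind of off-the-shelf perturbation that optimal mechanisms of this sort are designed to outperform (this is the standard construction, not the mechanism from the talk; the numbers are made up):

import numpy as np
def laplace_mechanism(true_value, sensitivity, epsilon, rng=np.random.default_rng()):
    # release the statistic plus Laplace noise of scale sensitivity / epsilon,
    # which yields epsilon-differential privacy for this single query
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)
count = 1234                                  # e.g., a count query with sensitivity 1
for eps in (0.1, 1.0):                        # smaller epsilon: more privacy, more noise
    print(eps, laplace_mechanism(count, sensitivity=1.0, epsilon=eps))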

Bio:

Aras Selvi is a PhD candidate in Operations Research at Imperial College Business School, where he is supervised by Wolfram Wiesemann as a member of the Models and Algorithms for Decision-Making under Uncertainty research group. His research interests are at the intersection of robust optimization, machine learning, and computational privacy. He is currently a fellow of The Alan Turing Institute as a recipient of the PhD Enrichment Scheme placement program. Aras is also affiliated with the Computational Optimization Group and the Data Science Institute of Imperial College London. 

March 10, 2023

15:00 (CET)

Technion - Israel Institute of Technology

Title: An Algorithm for Maximizing a Convex Function Based on its Minimum

Abstract:

In this talk, an algorithm for maximizing a convex function over a convex feasible set (a common problem in robust analysis) is proposed. The algorithm consists of two phases: in phase 1, a feasible starting point is obtained, which is then used in a gradient descent algorithm in phase 2. The main contribution of the paper concerns phase 1: five different methods are used to approximate the original NP-hard problem of maximizing a convex function (MCF) by a tractable convex optimization problem. All the methods use the minimizer of the convex objective function in their construction. The performance of the overall algorithm is tested on a wide variety of MCF problems, demonstrating its efficiency.
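The two-phase structure can be illustrated with a generic sketch on my own toy data (the talk's five phase-1 constructions are not reproduced here): a feasible starting point is pushed away from the minimizer of the objective, and phase 2 then runs a simple projected gradient ascent, i.e., gradient descent on the negated objective. Which local maximizer the ascent reaches depends entirely on the starting point, which is why phase 1 matters.

import numpy as np
Q = np.array([[2.0, 0.5], [0.5, 1.0]])        # convex objective f(x) = x' Q x
f = lambda x: x @ Q @ x
grad = lambda x: 2 * Q @ x
lo, hi = -np.ones(2), np.ones(2)              # feasible box [-1, 1]^2
x_min = np.zeros(2)                           # minimizer of f, used to build the start
x = np.clip(x_min + np.array([0.9, -0.9]), lo, hi)   # phase 1 (crude): a feasible start
for _ in range(200):                          # phase 2: projected gradient ascent
    x = np.clip(x + 0.05 * grad(x), lo, hi)
print("local maximizer:", x, "value:", f(x))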

Bio:

Aharon Ben-Tal is a Professor of Operations Research at the Technion – Israel Institute of Technology. He received his Ph.D. in Applied Mathematics from Northwestern University in 1973. He has been a Visiting Professor at the University of Michigan, the University of Copenhagen, Delft University of Technology, MIT, CWI Amsterdam, Columbia, and NYU. His interests are in continuous optimization, particularly nonsmooth and large-scale problems, conic and robust optimization, as well as convex and nonsmooth analysis. In recent years the focus of his research has been on optimization problems affected by uncertainty; he is one of the founders of the Robust Optimization methodology for addressing such problems. Over the last 20 years he has devoted much effort to engineering applications of optimization methodology and computational schemes, and some of the algorithms developed in the MINERVA Optimization Center are in use by industry (medical imaging, aerospace). He has published more than 135 papers in professional journals and co-authored three books. Prof. Ben-Tal was Dean of the Faculty of Industrial Engineering and Management at the Technion (1989-1992 and 2011-2014). He has served on the editorial boards of all major OR/optimization journals and has given numerous plenary and keynote lectures at international conferences.

In 2007 Professor Ben-Tal was awarded the EURO Gold Medal, the highest distinction of Operations Research within Europe. In 2009 he was named a Fellow of INFORMS, and in 2015 a Fellow of SIAM. In 2016 INFORMS awarded him the Khachiyan Prize for Lifetime Achievement in the area of Optimization, and in 2017 the Operations Research Society of Israel (ORSIS) awarded him its Lifetime Achievement Prize. As of February 2023, his work has over 33,000 citations (Google Scholar).


February 24, 2023 

15:00 (CET)

Massachusetts Institute of Technology

Title: Robust Optimization with Information Theory Inspired Uncertainty Sets and its Applications 

Abstract:

Classical approaches in robust optimization (RO) involve out-of-the-box uncertainty sets such as norm, budget or polyhedral sets. Apart from leveraging available data to cross-validate hyperparameters, out-of-the-box uncertainty sets seldom incorporate distributional information, so they inherently do not capture the real-world behavior of uncertain parameters.

Distributionally Robust Optimization (DRO), by contrast, requires constraints to hold in expectation under every distribution in a class of underlying distributions.  We propose a new type of uncertainty set motivated by information theory, which incorporates the probability distributions directly. Initial computational tests with our approach yield reductions in the maximum and mean violation as compared to classical RO and DRO. We apply this novel uncertainty set in a real-world setting of the Hartford Hospital's Bone and Joint Institute to optimize the patient census while considering the uncertainty in the rest times of patients after surgery. We generate a timetable for surgeons that reduces the monthly census by up to 10%, outperforming both out-of-the-box uncertainty sets and DRO. (Joint work with Benjamin Boucher, MIT).
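The abstract does not spell out the new set, but the flavor of divergence-based constructions can be conveyed with a standard example (a hypothetical illustration with made-up numbers, not the set proposed in the talk): the worst-case expected loss over all distributions inside a Kullback-Leibler ball around the empirical distribution.

import cvxpy as cp
import numpy as np
losses = np.array([1.0, 2.0, 5.0, 0.5])       # loss under each of four scenarios
p_hat = np.full(4, 0.25)                      # empirical distribution
rho = 0.05                                    # divergence budget
p = cp.Variable(4, nonneg=True)
cons = [cp.sum(p) == 1, cp.sum(cp.kl_div(p, p_hat)) <= rho]   # KL(p || p_hat) <= rho
worst = cp.Problem(cp.Maximize(losses @ p), cons)
worst.solve()
print("nominal expected loss   :", float(losses @ p_hat))
print("worst-case expected loss:", worst.value)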



Bio:

Dimitris Bertsimas is the Boeing Professor of Operations Research and the Associate Dean of Business Analytics at the Massachusetts Institute of Technology. He is a member of the US National Academy of Engineering, an INFORMS fellow, recipient of the John von Neumann Theory Prize, the Frederick W. Lanchester Prize, the Erlang Prize, finalist of the Franz Edelman Prize four times, and the INFORMS President’s Award, among many other research and teaching awards, supervisor of 88 completed and 25 current doctoral theses, editor of the INFORMS Journal on Optimization, co-author of seven textbooks and co-founder of ten analytics companies and two foundations. 


February 10, 2023 

15:00 (CET)

Imperial College London

Title: Randomized Assortment Optimization 

Abstract:

When a firm selects an assortment of products to offer to customers, it uses a choice model to anticipate their probability of purchasing each product. In practice, the estimation of these models is subject to statistical errors, which may lead to significantly suboptimal assortment decisions. Recent work has addressed this issue using robust optimization, where the true parameter values are assumed unknown and the firm chooses an assortment that maximizes its worst-case expected revenues over an uncertainty set of likely parameter values, thus mitigating estimation errors. In this talk, we introduce the concept of randomization into the robust assortment optimization literature. We show that the standard approach of deterministically selecting a single assortment to offer is not always optimal in the robust assortment optimization problem. Instead, the firm can improve its worst-case expected revenues by selecting an assortment randomly according to a prudently designed probability distribution. We demonstrate this potential benefit of randomization both theoretically in an abstract problem formulation as well as empirically across three popular choice models: the multinomial logit model, the Markov chain model, and the preference ranking model. We show how an optimal randomization strategy can be determined exactly and heuristically. Besides the superior in-sample performance of randomized assortments, we demonstrate improved out-of-sample performance in a data-driven setting that combines estimation with optimization. Our results suggest that more general versions of the assortment optimization problem—incorporating business constraints, more flexible choice models and/or more general uncertainty sets—tend to be more receptive to the benefits of randomization.
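The benefit of randomization in this max-min setting can already be seen in a tiny finite example (a toy of my own with a made-up revenue table, not the talk's choice-model formulations): if R[a, s] denotes the expected revenue of candidate assortment a under parameter scenario s, the best randomized assortment solves a small linear program, and its worst case can strictly exceed that of the best single assortment. In the sketch below, the optimal distribution splits its mass over the first two assortments to hedge against whichever scenario materializes.

import cvxpy as cp
import numpy as np
R = np.array([[4.0, 0.0],                     # rows: candidate assortments
              [0.0, 4.0],                     # columns: parameter scenarios
              [1.5, 1.5]])
q = cp.Variable(R.shape[0], nonneg=True)      # probability of offering each assortment
t = cp.Variable()                             # worst-case expected revenue
cons = [cp.sum(q) == 1] + [R[:, s] @ q >= t for s in range(R.shape[1])]
prob = cp.Problem(cp.Maximize(t), cons)
prob.solve()
print("best deterministic worst case:", R.min(axis=1).max())   # 1.5
print("best randomized worst case   :", round(prob.value, 2))  # 2.0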


Bio:

Wolfram Wiesemann is Professor of Analytics & Operations as well as the head of the Analytics, Marketing & Operations department at Imperial College Business School. His research interests revolve around decision-making under uncertainty, with applications to supply chain management, healthcare, and energy. Wolfram is an elected member of the boards of the Mathematical Optimization Society and the Stochastic Programming Society, and he serves as an area editor for Operations Research Letters as well as an associate editor for Mathematical Programming, Operations Research, Manufacturing & Service Operations Management, and SIAM Journal on Optimization.