Season 1: Feb 2023 - June 2023

June 30, 2023

15:00 (CET)

Felix Hommelsheim, University of Bremen


Title: Recoverable Robust Optimization with Commitment


Stefan Lendl, University of Graz


Title: On the Complexity of Robust Multi-Stage Problems in the Polynomial Hierarchy 


Abstract:

We propose a model for recoverable robust optimization with commitment. Given a combinatorial optimization problem and uncertainty about elements that may fail, we ask for a robust solution that, after the failing elements are revealed, can be augmented in a limited way. However, we commit to preserving the remaining elements of the initial solution. We consider different polynomial-time solvable combinatorial optimization problems and settle the computational complexity of their robust counterparts with commitment. We show for the weighted matroid basis problem that an optimal solution to the nominal problem is also optimal for its robust counterpart. Indeed, matroids are provably the only structures with this strong property. In contrast, the robust counterparts of other problems, such as the matching and the stable set problem, are NP-hard, even in bipartite graphs. However, we establish polynomial-time algorithms for the robust counterparts of the unweighted stable set problem in bipartite graphs and the weighted stable set problem in interval graphs, also known as the interval scheduling problem.
Joint work with Nicole Megow, Komal Muluk, and Britta Peis.

Abstract:

We study the computational complexity of multi-stage robust optimization problems. Such problems are formulated with alternating min/max quantifiers and therefore naturally fall into a higher stage of the polynomial hierarchy. Despite this, almost no hardness results with respect to the polynomial hierarchy are known.
In this work, we examine the hardness of robust two-stage adjustable and robust recoverable optimization with budgeted uncertainty sets. Our main technical contribution is the introduction of a technique tailored to prove Σp3-hardness of such problems. We highlight a difference between continuous and discrete budgeted uncertainty: In the discrete case, indeed a wide range of problems becomes complete for the third stage of the polynomial hierarchy; in particular, this applies to the TSP, independent set, and vertex cover problems. However, in the continuous case this does not happen and problems remain in the first stage of the hierarchy. Finally, if we allow the uncertainty to not only affect the objective, but also multiple constraints, then this distinction disappears and even in the continuous case we encounter hardness for the third stage of the hierarchy. This shows that even robust problems which are already NP-complete can still exhibit a significant computational difference between column-wise and row-wise uncertainty.

 

Joint work with Marc Goerigk and Lasse Wulf.
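The alternating min-max-min structure behind these hardness results can be made concrete with a tiny brute force. The sketch below is illustrative only (toy data, not from the talk): it evaluates a recoverable robust selection problem with discrete budgeted uncertainty by enumerating all three quantifier levels, which is exactly the nesting that places such problems beyond NP in the polynomial hierarchy.

```python
from itertools import combinations

# Recoverable robust selection (illustrative toy): pick p of n items; the
# adversary raises the cost of at most Gamma items by their deviation; we
# may then swap up to k items of our solution. Brute force enumerates all
# three quantifier levels (min over x, max over scenarios, min over recovery).
c = [4, 3, 5, 2]          # nominal item costs
d = [3, 6, 1, 5]          # cost deviations if attacked
n, p, Gamma, k = 4, 2, 1, 1

def cost(sol, attacked):
    return sum(c[i] + (d[i] if i in attacked else 0) for i in sol)

def recover(x, attacked):
    # cheapest solution that introduces at most k elements not in x
    return min(cost(y, attacked)
               for y in combinations(range(n), p)
               if len(set(y) - set(x)) <= k)

def robust_value(x):
    return max(recover(x, set(a))
               for g in range(Gamma + 1)
               for a in combinations(range(n), g))

best = min(combinations(range(n), p), key=robust_value)
```

Each quantifier level multiplies the enumeration cost, which is the intuitive reason the discrete-budget variants discussed above climb to the third level of the hierarchy.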

Bio:

Felix Hommelsheim is a postdoc in the group of Prof. Dr. Nicole Megow at the University of Bremen. He received his PhD at TU Dortmund University, where he was supervised by Prof. Dr. Christoph Buchheim. Felix's research focuses on the design of efficient (approximation) algorithms for combinatorial optimization problems under uncertainty. In particular, he is interested in algorithms for survivable network design problems.

Bio:

Stefan Lendl is COO and co-founder of s2 data & algorithms and a postdoctoral researcher at the University of Graz. He studied mathematics at Graz University of Technology and obtained his PhD under the supervision of Bettina Klinz in 2019. His research interests span both applications and the theory of robust optimization. His theoretical interest is in obtaining a deeper understanding of the computational complexity of robust optimization problems, using techniques like the polynomial hierarchy and parameterized complexity. His start-up s2 data & algorithms develops MasterScheduler, a software solution applying robust optimization to holistic supply chain optimization.

June 16, 2023

15:00 (CET)

Omar El Housni, Cornell University


Title: Robust Facility Location Problems 

Abstract: 

The facility location problem is a well-known and extensively studied problem in Operations Research. In this talk, we address robust facility location problems, which provide a valuable approach to tackle the complexities arising from uncertain demand. We consider a two-stage robust facility location problem on a metric under uncertain demand. The decision-maker needs to decide on the (integral) units of supply for each facility in the first stage to satisfy an uncertain second-stage demand, such that the sum of the first-stage supply cost and the worst-case cost of satisfying the second-stage demand over all scenarios is minimized. The second-stage decisions are only assignment decisions, without the possibility of adding recourse supply capacity. This makes our model different from existing work on two-stage robust facility location and set covering problems. We consider an implicit model of uncertainty with an exponential number of demand scenarios, specified by an upper bound k on the number of second-stage clients. In an optimal solution, the second-stage assignment decisions depend on the scenario; surprisingly, we show that restricting to a fixed (static) fractional assignment for each potential client, irrespective of the scenario, gives an O(log k/log log k)-approximation for the problem. Moreover, the best such static assignment can be computed efficiently, giving us the desired guarantee. Our static assignment policies achieve strong approximate solutions both theoretically and numerically, and offer a promising approach to handling uncertain demand in diverse facility location settings. This is joint work with Vineet Goyal and David Shmoys.
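As a toy illustration of this model (made-up data and full enumeration in place of any real algorithm), the sketch below computes the optimal integral first-stage supply against the worst of all scenarios with k unit-demand clients:

```python
from itertools import combinations, product

# Tiny brute-force version of the two-stage robust facility location model:
# choose integral first-stage supply per facility; then any k clients may
# arrive with unit demand and must be assigned to the purchased supply at
# metric cost. No recourse capacity can be added in the second stage.
s = [3, 1]                  # per-unit first-stage supply cost
dist = [[1, 4, 6],          # dist[f][c]: cost of serving client c from facility f
        [5, 2, 2]]
n_clients, k = 3, 2

def assign_cost(supply, clients):
    # cheapest feasible assignment of the active clients (brute force)
    best = float('inf')
    for assign in product(range(len(supply)), repeat=len(clients)):
        load = [assign.count(f) for f in range(len(supply))]
        if all(l <= u for l, u in zip(load, supply)):
            best = min(best, sum(dist[f][c] for f, c in zip(assign, clients)))
    return best

def worst_case(supply):
    return max(assign_cost(supply, sc)
               for sc in combinations(range(n_clients), k))

best_supply = min(product(range(k + 1), repeat=2),
                  key=lambda u: s[0] * u[0] + s[1] * u[1] + worst_case(u))
total = s[0] * best_supply[0] + s[1] * best_supply[1] + worst_case(best_supply)
```

Here the second-stage assignment is re-optimized per scenario; the static policies of the talk would instead fix a fractional assignment for each potential client up front, and the abstract's O(log k/log log k) bound says this loses surprisingly little.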


Bio:

Omar El Housni is an assistant professor at Cornell Tech and in the School of Operations Research and Information Engineering at Cornell University. He is also a Field Member of the Center for Applied Mathematics at Cornell. Omar received a PhD in Operations Research from Columbia University and a BS and MS in Applied Mathematics from École Polytechnique in Paris. Omar's research focuses on the design of robust and efficient algorithms for sequential dynamic optimization problems under uncertainty, with applications in revenue management, assortment optimization, facility location problems and online matching markets.


May 26, 2023

15:00 (CET)

Constantine Caramanis, The University of Texas at Austin


Title: Contextual Reinforcement Learning when we don't know the contexts 

Abstract: 

Contextual Bandits and more generally, contextual reinforcement learning, studies the problem where the learner relies on revealed contexts, or labels, to adapt learning and optimization strategies. What can we do when those contexts are missing? 

 

Statistical learning with missing or hidden information is ubiquitous in many theoretical and applied problems. A basic yet fundamental setting is that of mixtures, where each data point is generated by one of several possible (unknown) processes. In this talk, we are interested in the dynamic decision-making version of this problem. At the beginning of each (finite length, typically short) episode, we interact with an MDP drawn from a set of M possible MDPs. The identity of the MDP for each episode -- the context -- is unknown.

 

We review the basic setting of MDPs and Reinforcement Learning, and explain in that framework why this class of problems is both important and challenging. Then, we outline several of our recent results in this area, as time permits. We first show that without additional assumptions, the problem is statistically hard in the number of different Markov chains: finding an epsilon-optimal policy requires exponentially (in M) many episodes. We then study several special and natural classes of latent MDPs (LMDPs). We show how ideas from the method of moments, in addition to the principle of optimism, can be applied here to derive new, sample-efficient RL algorithms in the presence of latent contexts.

Bio:

Constantine Caramanis is a Professor in the ECE department of The University of Texas at Austin. He received a PhD in EECS from The Massachusetts Institute of Technology, in the Laboratory for Information and Decision Systems (LIDS), and an AB in Mathematics from Harvard University. His current research interests focus on decision-making in large-scale complex systems, with a focus on statistical learning and optimization.

May 19, 2023

15:00 (CET)

Frauke Liers, Friedrich-Alexander-Universität Erlangen-Nürnberg


Title: (Data-driven) distributional robustness over time: How to 'learn' relevant uncertainties together with robust decisions?


Abstract: 

In many applications, determining optimized solutions that are hedged against uncertainties is mandatory. Classical stochastic optimization approaches, however, may be prone to the disadvantage that the underlying probability distributions are unknown or uncertain themselves. On the other hand, standard robust optimization may lead to conservative results, as it generates solutions that are feasible regardless of how uncertainties manifest themselves within predefined uncertainty sets. Distributionally robust optimization (DRO) lies at the interface of robust and stochastic optimization, as it robustly protects against uncertain distributions. DRO currently receives increased attention and is considered an alternative that can lead to less conservative but still uncertainty-protected solutions. In this talk, we will review some approaches in the area of distributional robustness. We will explain some recent developments that use scenario observations to learn more about the uncertain distributions over time, together with best possible protected solutions. The goal is to incorporate new information when it arrives and to improve our solution without resolving the entire robust counterpart. We also present computational results showing that our approach based on online learning leads to a practically efficient solution algorithm for DRO over time. This is joint work with K. Aigner, A. Bärmann, K. Braun, F. Liers, S. Pokutta, O. Schneider, K. Sharma, and S. Tschuppik (INFORMS Journal on Optimization, 2023).
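A rough sketch of the "learning while protecting" idea (illustrative only, not the paper's algorithm): keep an empirical distribution over a finite set of scenarios, and protect against every distribution in an L1-ball around it whose radius shrinks like 1/sqrt(t), so the worst-case estimate tightens as observations accumulate.

```python
import random

# Losses on a finite support; the empirical distribution is updated after
# each observation, and we hedge against all distributions within an
# L1-ball of shrinking radius around it.
losses = [1.0, 2.0, 5.0]

def worst_case_expectation(p_hat, r):
    # L1-ball worst case on the simplex: move up to r/2 probability mass
    # from the lowest-loss outcomes to the highest-loss outcome
    q = list(p_hat)
    hi = max(range(len(losses)), key=lambda i: losses[i])
    budget = r / 2
    for i in sorted(range(len(losses)), key=lambda i: losses[i]):
        if i == hi:
            continue
        take = min(q[i], budget)
        q[i] -= take
        q[hi] += take
        budget -= take
    return sum(qi * li for qi, li in zip(q, losses))

rng = random.Random(0)
counts = [0, 0, 0]
for t in range(1, 201):                     # observe scenarios one by one
    u = rng.random()
    counts[0 if u < 0.5 else 1 if u < 0.8 else 2] += 1
    p_hat = [c / t for c in counts]
    wc = worst_case_expectation(p_hat, r=1 / t ** 0.5)
```

As t grows, the radius and hence the gap between the worst-case estimate and the empirical expectation shrink, which is the conservatism reduction the abstract describes.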

Bio:

Frauke Liers studied mathematics at the University of Cologne and obtained her doctoral degree in computer science in 2004. After research stays in Rome and Paris, she headed a DFG-funded Emmy Noether Junior Research Group in Cologne. In 2010, she obtained her Habilitation, and in 2012 she became professor in Applied Mathematics in the Department of Mathematics at the Friedrich-Alexander-Universität Erlangen-Nürnberg. Her research is in optimization under uncertainty, including combinatorial, mixed-integer and nonlinear aspects, with a particular interest in integrating knowledge from data analysis. She has coordinated the European Graduate College MINOA (Marie-Curie ITN, 2018-2021) and is Principal Investigator in the Collaborative Research Centers CRC 1411 and CRC/TRR 154. Since 04/2023, she has been spokesperson of the CRC/TRR 154.

May 5, 2023

15:00 (CET)

Marc Goerigk, University of Passau


Title: Scenario Reduction for Robust Optimization with Discrete Uncertainty 

Abstract: 

Robust optimization typically follows a worst-case perspective, where a single scenario may determine the objective value of a given solution. Accordingly, it is a challenging task to reduce the size of an uncertainty set without changing the resulting objective value too much. On the other hand, robust optimization problems with many scenarios tend to be hard to solve, in particular for two-stage problems. Hence, a reduced uncertainty set may be central to finding solutions in reasonable time. We propose scenario reduction methods that give guarantees on the performance of the resulting robust solution. Scenario reduction problems for one- and two-stage robust optimization are framed as optimization problems that depend only on the uncertainty set and not on the underlying decision-making problem. Experimental results indicate that objective values for the reduced uncertainty sets are closely correlated with the original objective values, resulting in better solutions than when using general-purpose clustering methods such as K-means.
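One way such decision-independent guarantees can arise (an illustrative sketch with made-up data, not the paper's method): if every dropped cost scenario lies within L-infinity distance eps of some kept scenario, then for any solution selecting m items the worst case over the reduced set underestimates the true worst case by at most m*eps. Choosing the reduced set that minimizes this covering radius depends only on the uncertainty set.

```python
from itertools import combinations

# Five cost scenarios over three items; keep two of them so that the
# covering radius (max L-infinity distance of any scenario to the nearest
# kept one) is as small as possible.
scenarios = [(4, 2, 3), (4, 3, 3), (1, 5, 2), (2, 5, 2), (3, 3, 3)]

def radius(kept):
    return max(min(max(abs(a - b) for a, b in zip(s, t)) for t in kept)
               for s in scenarios)

kept = min(combinations(scenarios, 2), key=radius)

# sanity check on a min-max selection problem: choose 2 of 3 items
def worst(x, scen):
    return max(sum(s[i] for i in x) for s in scen)

x_red = min(combinations(range(3), 2), key=lambda x: worst(x, kept))
guarantee = worst(x_red, kept) + 2 * radius(kept)   # 2 items selected
```

The solution optimized against the reduced set is then provably within `guarantee` of its true worst case over all scenarios.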

 

Authors: Marc Goerigk and Mohammad Khosravi

Bio:

Marc Goerigk is Professor for Business Decisions and Data Science at the University of Passau. His research focuses mainly on robust decision making for combinatorial problems, in particular regarding aspects of complexity and making the best use of data. Marc holds a PhD in mathematics from the University of Göttingen. Before joining his colleagues in Passau, he was Professor at the University of Siegen, lecturer at Lancaster University and post-doctoral researcher at the University of Kaiserslautern.

April 21, 2023

17:00 (CET)

Phebe Vayanos, University of Southern California


Title: Learning Optimal Classification Trees Robust to Distribution Shifts


Abstract: 

We consider the problem of learning classification trees that are robust to distribution shifts between training and testing/deployment data. This problem arises frequently in high-stakes settings such as public health and social work, where data is often collected using self-reported surveys, which are highly sensitive to, e.g., the framing of the questions, the time when and place where the survey is conducted, and the level of comfort the interviewee has in sharing information with the interviewer. We propose a method for learning optimal robust classification trees based on mixed-integer robust optimization technology. In particular, we demonstrate that the problem of learning an optimal robust tree can be cast as a single-stage mixed-integer robust optimization problem with a highly nonlinear and discontinuous objective. We reformulate this problem equivalently as a two-stage linear robust optimization problem, for which we devise a tailored solution procedure based on constraint generation. We evaluate the performance of our approach on numerous publicly available datasets and compare it to a regularized, non-robust optimal tree. We show an increase of up to 14.16% in worst-case accuracy and of up to 4.72% in average-case accuracy from using our robust solution in comparison to the non-robust one, across several datasets and distribution shifts.
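To make the robustness notion concrete, here is a deliberately tiny sketch (a depth-1 tree on one feature with made-up data, not the paper's mixed-integer method): under a shift that moves each feature value by at most delta, a point only counts as correct in the worst case if its entire shift interval stays on the correct side of the split.

```python
# Depth-1 classification tree ("stump"): predict class 0 iff x <= t.
# A point (x, y) is robustly correct under shift budget delta only if the
# whole interval [x - delta, x + delta] lands on the correct side of t.
data = [(0.2, 0), (0.8, 0), (1.1, 1), (1.9, 1), (1.0, 0)]
delta = 0.15

def nominal_acc(t):
    return sum((x <= t) == (y == 0) for x, y in data) / len(data)

def robust_acc(t):
    return sum((x + delta <= t) if y == 0 else (x - delta > t)
               for x, y in data) / len(data)

# candidate thresholds: every relevant breakpoint
cands = sorted({x + s * delta for x, _ in data for s in (-1, 0, 1)})
t_nom = max(cands, key=nominal_acc)     # best split on nominal accuracy
t_rob = max(cands, key=robust_acc)      # best split on worst-case accuracy
```

The nominally perfect split turns out to be fragile, while the robust split deliberately sacrifices one nominal point to protect the rest, which is the trade-off the 14.16% worst-case gain above reflects at scale.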

Bio:

Phebe Vayanos is an Associate Professor of Industrial & Systems Engineering and Computer Science at the University of Southern California. She is also an Associate Director of CAIS, the Center for Artificial Intelligence in Society at USC. Her research is focused on Operations Research and Artificial Intelligence, in particular on optimization and machine learning. Her work is motivated by problems that are important for social good, such as those arising in public housing allocation, public health, and biodiversity conservation. Prior to joining USC, she was a lecturer in the Operations Research and Statistics Group at the MIT Sloan School of Management and a postdoctoral research associate in the Operations Research Center at MIT. She holds a PhD degree in Operations Research and an MEng degree in Electrical & Electronic Engineering, both from Imperial College London. She is a recipient of the NSF CAREER award and the INFORMS Diversity, Equity, and Inclusion Ambassador Program Award.

March 24, 2023

15:00 (CET)

Donato Maragno, University of Amsterdam

Title: Finding regions of counterfactual explanations via robust optimization

Aras Selvi, Imperial College London

Title: Differential Privacy via Distributionally Robust Optimization 

Abstract:

Counterfactual explanations (CEs) are critical for detecting bias and improving the explainability of data-driven classification models. A counterfactual explanation is a minimally perturbed data point for which the decision of the model changes. However, most existing methods can only provide one CE, which may not be feasible for the user. In this talk, an iterative method for calculating robust CEs will be presented; these are CEs that remain valid even after the features are slightly perturbed. The method provides a whole region of CEs, allowing the user to choose a suitable recourse to obtain a desired outcome. We will discuss how to use algorithmic ideas from robust optimization and prove convergence results for the most common machine learning methods, including logistic regression, decision trees, random forests, and neural networks.

Abstract:

In recent years, differential privacy has emerged as the de facto standard for sharing statistics of datasets while limiting the disclosure of private information about the involved individuals. This is achieved by randomly perturbing the statistics to be published, which in turn leads to a privacy-accuracy trade-off: larger perturbations provide stronger privacy guarantees, but they result in less accurate statistics that offer lower utility to the recipients. Of particular interest are therefore optimal mechanisms that provide the highest accuracy for a pre-selected level of privacy. To date, work in this area has focused on specifying families of perturbations a priori and proving their asymptotic and/or best-in-class optimality.


In this paper, we develop a class of mechanisms that enjoy non-asymptotic and unconditional optimality guarantees. To this end, we formulate the mechanism design problem as an infinite-dimensional distributionally robust optimization problem. We show that this problem affords a strong dual, and we develop converging hierarchies of finite-dimensional upper and lower bounding problems. Our upper bounds correspond to implementable perturbations whose suboptimality can be bounded by our lower bounds. Both bounding problems can be solved within seconds via cutting plane techniques that exploit the inherent problem structure. Our numerical experiments demonstrate that our perturbations can outperform the best previous results from the literature on artificial as well as benchmark problems.

Co-authors: Huikang Liu & Wolfram Wiesemann 
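For readers new to the area, the privacy-accuracy trade-off is easiest to see in the classical Laplace mechanism, one of the fixed a-priori perturbation families that optimal-mechanism work of this kind seeks to improve upon (background illustration, not the paper's construction):

```python
import math
import random

# Classical Laplace mechanism for a sensitivity-1 count query: adding
# Laplace(1/epsilon) noise yields epsilon-differential privacy. Smaller
# epsilon means stronger privacy but larger expected error -- exactly the
# privacy-accuracy trade-off described above.
def laplace_noise(scale, rng):
    u = rng.random() - 0.5              # uniform on [-0.5, 0.5)
    while u == -0.5:                    # avoid log(0) on the boundary
        u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def release_count(true_count, epsilon, rng):
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(1)
for epsilon in (0.1, 1.0, 10.0):
    err = sum(abs(release_count(100, epsilon, rng) - 100)
              for _ in range(20000)) / 20000
    print(f"epsilon={epsilon:5.1f}  mean abs error ~ {err:.2f}")
```

The expected absolute error equals 1/epsilon here; the talk's distributionally robust mechanisms aim to beat such fixed noise families for a given privacy level.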

Bio:

Donato Maragno is a PhD candidate in the Department of Business Analytics at the University of Amsterdam in the Netherlands. His research centers on the integration of machine learning and optimization techniques. He has contributed to the development of OptiCL, an open-source tool that enables optimization with constraint learning.

Bio:

Aras Selvi is a PhD candidate in Operations Research at Imperial College Business School, where he is supervised by Wolfram Wiesemann as a member of the Models and Algorithms for Decision-Making under Uncertainty research group. His research interests are at the intersection of robust optimization, machine learning, and computational privacy. He is currently a fellow of The Alan Turing Institute as a recipient of the PhD Enrichment Scheme placement program. Aras is also affiliated with the Computational Optimization Group and the Data Science Institute of Imperial College London. 

March 10, 2023

15:00 (CET)

Aharon Ben-Tal, Technion - Israel Institute of Technology

Title: An Algorithm for Maximizing a Convex Function Based on its Minimum

Abstract:

In this talk, an algorithm for maximizing a convex function over a convex feasible set (a common problem in robust analysis) is proposed. The algorithm consists of two phases: in phase 1, a feasible starting point is obtained, which is then used in a gradient descent algorithm in phase 2. The main contribution of the paper concerns phase 1: five different methods are used to approximate the original NP-hard problem of maximizing a convex function (MCF) by a tractable convex optimization problem. All the methods use the minimizer of the convex objective function in their construction. The performance of the overall algorithm is tested on a wide variety of MCF problems, demonstrating its efficiency.
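A stripped-down sketch of the two-phase idea (illustrative only, not the paper's five phase-1 constructions): for a convex objective, a projected gradient step from any feasible point away from the minimizer keeps increasing the objective until it terminates at an extreme point of the feasible set.

```python
# Maximize the convex function f(x) = ||x - m||^2 over a box, where m is
# the unconstrained minimizer of f and lies inside the box. Phase 1:
# feasible start slightly off the minimizer. Phase 2: projected gradient
# ascent, which for convex f ends at a vertex of the box.
lo, hi = [-1.0, -2.0], [3.0, 1.0]
m = [0.5, -0.5]                       # minimizer of f

def grad(x):                          # gradient of ||x - m||^2
    return [2 * (xi - mi) for xi, mi in zip(x, m)]

def project(x):
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

x = project([mi + 1e-3 for mi in m])  # phase 1: feasible starting point
for _ in range(200):                  # phase 2: projected gradient ascent
    x = project([xi + 0.1 * gi for xi, gi in zip(x, grad(x))])

f = sum((xi - mi) ** 2 for xi, mi in zip(x, m))
```

Here the ascent ends at the vertex (3, 1); checking the four box vertices confirms this is globally optimal, in line with the fact that a convex function attains its maximum at an extreme point.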

Bio:

Aharon Ben-Tal is a Professor of Operations Research at the Technion – Israel Institute of Technology. He received his Ph.D. in Applied Mathematics from Northwestern University in 1973. He has been a Visiting Professor at the University of Michigan, the University of Copenhagen, Delft University of Technology, MIT, CWI Amsterdam, Columbia and NYU. His interests are in continuous optimization, particularly nonsmooth and large-scale problems, conic and robust optimization, as well as convex and nonsmooth analysis. In recent years the focus of his research has been on optimization problems affected by uncertainty; he is one of the founders of the Robust Optimization methodology to address these problems. In the last 20 years he has devoted much effort to engineering applications of optimization methodology and computational schemes. Some of the algorithms developed in the MINERVA Optimization Center are in use by industry (medical imaging, aerospace). He has published more than 135 papers in professional journals and co-authored three books. Prof. Ben-Tal was Dean of the Faculty of Industrial Engineering and Management at the Technion (1989-1992 and 2011-2014). He has served on the editorial boards of all major OR/optimization journals and has given numerous plenary and keynote lectures at international conferences.

 

In 2007, Professor Ben-Tal was awarded the EURO Gold Medal, the highest distinction of Operations Research within Europe.

In 2009 he was named Fellow of INFORMS, and in 2015 Fellow of SIAM.

In 2016 INFORMS awarded him the Khachiyan Prize for Lifetime Achievement in the area of Optimization, and in 2017 the Operations Research Society of Israel (ORSIS) awarded him its Lifetime Achievement Prize.

As of February 2023, his work has over 33,000 citations (Google Scholar).


February 24, 2023 

15:00 (CET)

Dimitris Bertsimas, Massachusetts Institute of Technology

Title: Robust Optimization with Information Theory Inspired Uncertainty Sets and its Applications 

Abstract:

Classical approaches in robust optimization (RO) involve out-of-the-box uncertainty sets such as norm, budget or polyhedral sets. Apart from leveraging available data to cross-validate hyperparameters, out-of-the-box uncertainty sets seldom incorporate distributional information, so they inherently do not capture the real-world behavior of uncertain parameters.

Distributionally Robust Optimization (DRO), by contrast, satisfies constraints in expectation among a class of underlying distributions.  We propose a new type of uncertainty set motivated by information theory, which incorporates the probability distributions directly. Initial computational tests with our approach yield reductions in the maximum and mean violation as compared to classical RO and DRO. We apply this novel uncertainty set in a real-world setting of the Hartford Hospital's Bone and Joint Institute to optimize the patient census while considering the uncertainty in the rest times of patients after surgery. We generate a timetable for surgeons that reduces the monthly census by up to 10%, outperforming both out-of-the-box uncertainty sets and DRO. (Joint work with Benjamin Boucher, MIT).
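As a generic illustration of an uncertainty set that incorporates distributional information (a plain KL-divergence ball, not necessarily the set proposed in the talk): the adversary may replace the empirical distribution p_hat by any q with KL(q || p_hat) <= r, and on a two-point support the worst-case expected loss can be found by a fine grid search over q.

```python
import math

# Worst-case expected loss over a KL-divergence ball around the empirical
# distribution, computed by grid search on a two-point support.
losses = [1.0, 4.0]
p_hat = [0.7, 0.3]

def kl(q, p):
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def worst_case(r):
    return max(q1 * losses[0] + (1 - q1) * losses[1]
               for k in range(1001)
               for q1 in [k / 1000]
               if kl([q1, 1 - q1], p_hat) <= r + 1e-12)  # float tolerance

nominal = p_hat[0] * losses[0] + p_hat[1] * losses[1]
```

Unlike a norm ball on the parameters themselves, the radius r directly controls how statistically plausible the adversary's distribution must be, which is the kind of distribution-awareness the proposed sets bring into classical RO.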



Bio:

Dimitris Bertsimas is the Boeing Professor of Operations Research and the Associate Dean of Business Analytics at the Massachusetts Institute of Technology. He is a member of the US National Academy of Engineering, an INFORMS fellow, recipient of the John von Neumann Theory Prize, the Frederick W. Lanchester Prize, the Erlang Prize, finalist of the Franz Edelman Prize four times, and the INFORMS President’s Award, among many other research and teaching awards, supervisor of 88 completed and 25 current doctoral theses, editor of the INFORMS Journal on Optimization, co-author of seven textbooks and co-founder of ten analytics companies and two foundations. 


February 10, 2023 

15:00 (CET)

Wolfram Wiesemann, Imperial College London

Title: Randomized Assortment Optimization 

Abstract:

When a firm selects an assortment of products to offer to customers, it uses a choice model to anticipate their probability of purchasing each product. In practice, the estimation of these models is subject to statistical errors, which may lead to significantly suboptimal assortment decisions. Recent work has addressed this issue using robust optimization, where the true parameter values are assumed unknown and the firm chooses an assortment that maximizes its worst-case expected revenues over an uncertainty set of likely parameter values, thus mitigating estimation errors. In this talk, we introduce the concept of randomization into the robust assortment optimization literature. We show that the standard approach of deterministically selecting a single assortment to offer is not always optimal in the robust assortment optimization problem. Instead, the firm can improve its worst-case expected revenues by selecting an assortment randomly according to a prudently designed probability distribution. We demonstrate this potential benefit of randomization both theoretically in an abstract problem formulation and empirically across three popular choice models: the multinomial logit model, the Markov chain model, and the preference ranking model. We show how an optimal randomization strategy can be determined exactly and heuristically. Besides the superior in-sample performance of randomized assortments, we demonstrate improved out-of-sample performance in a data-driven setting that combines estimation with optimization. Our results suggest that more general versions of the assortment optimization problem (incorporating business constraints, more flexible choice models and/or more general uncertainty sets) tend to be more receptive to the benefits of randomization.
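The benefit of randomization already appears in an abstract two-by-two toy (made-up revenue numbers, no actual choice model): with two candidate assortments and two plausible parameter scenarios, any single assortment has a poor worst case, while a mixed strategy hedges across scenarios.

```python
# R[a][s]: expected revenue of assortment a under parameter scenario s
# (illustrative numbers). The firm picks a distribution over assortments;
# the adversary then picks the scenario minimizing expected revenue.
R = [[3.0, 1.0],
     [1.0, 3.0]]

det = max(min(row) for row in R)      # best single-assortment worst case
rand = max(
    min(p * R[0][s] + (1 - p) * R[1][s] for s in range(2))
    for k in range(101)
    for p in [k / 100]                # grid over mixing probability p
)
```

This is exactly a matrix game: offering either assortment deterministically guarantees only 1, while the 50/50 mixture guarantees 2, mirroring the talk's observation that randomization can strictly improve worst-case expected revenue.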


Bio:

Wolfram Wiesemann is Professor of Analytics & Operations and head of the Analytics, Marketing & Operations department at Imperial College Business School. His research interests revolve around decision-making under uncertainty, with applications to supply chain management, healthcare and energy. Wolfram is an elected member of the boards of the Mathematical Optimization Society and the Stochastic Programming Society, and he serves as an area editor for Operations Research Letters and as an associate editor for Mathematical Programming, Operations Research, Manufacturing & Service Operations Management and the SIAM Journal on Optimization.