Season 2: Sep 2023 - Dec 2023

December 15, 2023

15:00 (CET)

Brad Sturt

University of Illinois at Chicago


Title: Improving the Security of United States Elections with Robust Optimization

Abstract:

For more than a century, election officials across the United States have inspected voting machines before elections using a procedure called Logic and Accuracy Testing (LAT). This procedure consists of election officials casting a test deck of ballots into each voting machine and confirming the machine produces the expected vote total for each candidate.

 

In this talk, I will bring a scientific perspective to LAT by introducing the first formal approach to designing test decks with rigorous security guarantees. Specifically, we propose using robust optimization to find test decks that are guaranteed to detect any voting machine misconfiguration that would cause votes to be swapped across candidates. Out of all the test decks with this security guarantee, the robust optimization problem yields the test deck with the minimum number of ballots, thereby minimizing implementation costs for election officials. To facilitate deployment at scale, we developed a practical exact algorithm for solving our robust optimization problems based on mixed-integer optimization and the cutting plane method.
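
To make the security criterion concrete, here is a small illustrative sketch (a simplification of our own, not the authors' implementation): in a toy single-contest model where every test ballot votes for exactly one candidate, a swap of two candidates goes undetected exactly when their expected totals coincide, so a secure deck needs pairwise-distinct candidate totals. The brute force below finds the smallest such deck; the paper achieves the same kind of guarantee at scale via mixed-integer optimization and cutting planes.

from itertools import combinations_with_replacement

def detects_all_swaps(totals):
    # In this toy model, swapping candidates i and j leaves the reported
    # totals unchanged exactly when totals[i] == totals[j], so detecting
    # every swap means all candidate totals are pairwise distinct.
    return len(set(totals)) == len(totals)

def min_test_deck(n_candidates):
    # Brute-force the smallest deck; each ballot votes for exactly one candidate.
    size = 0
    while True:
        for deck in combinations_with_replacement(range(n_candidates), size):
            totals = [deck.count(c) for c in range(n_candidates)]
            if detects_all_swaps(totals):
                return deck, totals
        size += 1

deck, totals = min_test_deck(4)
print(len(deck), "ballots, candidate totals:", totals)  # 6 ballots, totals [3, 2, 1, 0]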

 

In partnership with the Michigan Bureau of Elections, we retrospectively applied our robust optimization approach to all 6928 ballot styles from Michigan's November 2022 general election; this retrospective study reveals that the test decks with rigorous security guarantees obtained by our approach require, on average, only 1.2% more ballots than current practice. Our robust optimization approach has since been piloted in real-world elections by the Michigan Bureau of Elections as a low-cost way to improve election security and increase public trust in democratic institutions.

 

This is joint work with Alex Halderman (Michigan) and Braden Crimmins (Stanford and Michigan), and our preprint is available at https://arxiv.org/pdf/2308.02306.pdf.


Bio:

Brad Sturt is an assistant professor of business analytics at the University of Illinois Chicago. His current research focuses on the application of robust optimization to revenue management and public policy. He is also broadly interested in using robust optimization to design tractable approaches for dynamic optimization problems. His research has received several recognitions, including second place in the INFORMS Junior Faculty Interest Group (JFIG) Paper Competition and second place in the INFORMS George Nicholson Student Paper Competition.

December 1, 2023

15:00 (CET)

Esther Julien

TU Delft


Title: Machine Learning for K-adaptability in Two-stage Robust Optimization

Justin Dumouchelle

University of Toronto


Title: Neur2RO: Neural Two-Stage Robust Optimization

Abstract:

Two-stage robust optimization problems constitute one of the hardest optimization problem classes. One of the solution approaches to this class of problems is K-adaptability. This approach simultaneously seeks the best partitioning of the uncertainty set of scenarios into K subsets and optimizes decisions corresponding to each of these subsets. In the general case, it is solved using the K-adaptability branch-and-bound algorithm, which requires the exploration of exponentially growing solution trees. To accelerate finding high-quality solutions in such trees, we propose a machine learning-based node selection strategy. In particular, we construct a feature engineering scheme based on general two-stage robust optimization insights that allows us to train our machine learning tool on a database of resolved B&B trees and to apply it as-is to problems of different sizes and/or types. We experimentally show that our learned node selection strategy outperforms a vanilla, random node selection strategy when tested on problems of the same type as the training problems, even when the K-value or the problem size differs from those used in training. This work is in collaboration with Krzysztof Postek (BCG) and Ilker Birbil (UvA).
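
As a rough illustration of where such a learned policy plugs in, here is a generic best-first branch-and-bound skeleton (a sketch of our own; the features, weights, and subproblem callables are hypothetical placeholders, not the feature engineering or trained model from the talk):

import heapq

def learned_score(features):
    # Stand-in for the trained node-selection model: a hand-made linear score
    # over hypothetical node features such as tree depth, number of scenarios
    # already assigned to the K subsets, and the node's relaxation bound.
    weights = {"depth": -0.1, "n_assigned": 0.5, "relaxation_bound": -1.0}
    return sum(weights[name] * value for name, value in features.items())

def k_adaptability_bnb(root, expand, is_leaf, objective):
    # Best-first exploration of the K-adaptability tree: the heap is ordered
    # by the learned score, and nodes whose relaxation bound cannot beat the
    # incumbent are pruned. expand/is_leaf/objective encapsulate the
    # K-adaptability subproblems and are supplied by the caller.
    incumbent, incumbent_val = None, float("inf")
    heap, tie = [(-learned_score(root["features"]), 0, root)], 1
    while heap:
        _, _, node = heapq.heappop(heap)
        if node["features"]["relaxation_bound"] >= incumbent_val:
            continue  # prune: this subtree cannot improve the incumbent
        if is_leaf(node):
            value = objective(node)
            if value < incumbent_val:
                incumbent, incumbent_val = node, value
            continue
        for child in expand(node):
            heapq.heappush(heap, (-learned_score(child["features"]), tie, child))
            tie += 1
    return incumbent, incumbent_val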

Abstract:

Robust optimization provides a mathematical framework for modeling and solving decision-making problems under worst-case uncertainty. This work addresses two-stage robust optimization (2RO) problems (also called adjustable robust optimization), wherein first-stage and second-stage decisions are made before and after uncertainty is realized, respectively. This results in a nested min-max-min optimization problem which is extremely challenging computationally, especially when the decisions are discrete. We propose Neur2RO, an efficient machine learning-driven instantiation of column-and-constraint generation (CCG), a classical iterative algorithm for 2RO. Specifically, we learn to estimate the value function of the second-stage problem via a novel neural network architecture that is easy to optimize over by design. Embedding our neural network into CCG yields high-quality solutions quickly, as evidenced by experiments on two 2RO benchmarks, knapsack and capital budgeting. For knapsack, Neur2RO finds solutions that are within roughly 2% of the best-known values in a few seconds, compared to the three hours of the state-of-the-art exact branch-and-price algorithm; for larger and more complex instances, Neur2RO finds even better solutions. For capital budgeting, Neur2RO outperforms three variants of the K-adaptability algorithm, particularly on the largest instances, with a 5- to 10-fold reduction in solution time.

Our code and data are available at https://github.com/khalil-research/Neur2RO.
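
Schematically, and in generic notation that may differ from the paper's, the problem and the role of the learned surrogate are:

\[
\min_{x \in \mathcal{X}} \; \max_{\xi \in \Xi} \; \min_{y \in \mathcal{Y}(x,\xi)} \; c^\top x + q(\xi)^\top y,
\qquad
Q(x,\xi) := \min_{y \in \mathcal{Y}(x,\xi)} q(\xi)^\top y .
\]

Neur2RO replaces the second-stage value Q(x, ξ) by a neural estimate \hat{Q}_\theta(x, ξ) whose architecture is, by design, easy to optimize over, so that both the CCG master problem (over x) and the adversarial subproblem (over ξ) remain tractable with the surrogate embedded.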

Bio:

Esther is a PhD student at TU Delft. Her interests lie in embedding machine learning techniques into algorithm design to obtain faster/better results, currently for projects in robust optimization and phylogenetics.

Bio:

Justin is a PhD candidate at the University of Toronto. His research interests are at the intersection of machine learning and operations research, with a current focus on developing learning-based algorithms for discrete optimization, stochastic programming, and robust optimization.

November 17, 2023

15:00 (CET)

Michael Poss

Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM)


Title: Combinatorial Robust Optimization with Decision-Dependent Information Discovery and Polyhedral Uncertainty

Abstract: 

Given a nominal combinatorial optimization problem, we consider a robust two-stage variant with polyhedral cost uncertainty, called DDID. In the first stage, DDID selects a subset of uncertain cost coefficients to be observed, and in the second stage, DDID selects a solution to the nominal problem, where the remaining cost coefficients are still uncertain. Given a compact linear programming formulation for the nominal problem, we provide a mixed-integer linear programming (MILP) formulation for DDID. The MILP is compact if the number of constraints describing the uncertainty polytope, other than lower and upper bounds, is constant. In that case, the formulation leads to polynomial-time algorithms for DDID when the number of possible observations is polynomially bounded. We extend this formulation to more general nominal problems through column generation and constraint generation algorithms. We illustrate our reformulations and algorithms numerically on the selection problem, the orienteering problem, and the spanning tree problem.


Link to paper: https://hal.science/hal-04097679/ 
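
In generic notation borrowed from the decision-dependent information discovery literature (a sketch; details may differ from the paper), DDID is the nested problem

\[
\min_{w \in \mathcal{W}} \; \max_{\bar{\xi} \in \Xi} \; \min_{x \in \mathcal{X}} \; \max_{\xi \in \Xi(w,\bar{\xi})} \; \xi^\top x,
\qquad
\Xi(w,\bar{\xi}) = \{\xi \in \Xi : \xi_i = \bar{\xi}_i \text{ whenever } w_i = 1\},
\]

where the first-stage decision w selects which cost coefficients are observed, x is a feasible solution of the nominal combinatorial problem, and Ξ is the uncertainty polytope; the MILP reformulation in the talk is compact when Ξ is described by a constant number of constraints beyond variable bounds.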

Bio:

Michael Poss is a senior research fellow at the CNRS, in the LIRMM laboratory. His current research focuses mainly on robust combinatorial optimization. He is the founding managing editor of the Open Journal of Mathematical Optimization (https://ojmo.centre-mersenne.org/).

November 3, 2023

15:00 (CET)

University of Amsterdam


Title: An exact method for a class of robust nonlinear optimization problems. 


Trier University


Title: Using Column Generation in Column-and-Constraint Generation for Adjustable Robust Optimization 


Abstract:

We propose a novel exact approach to solve optimization problems with robust nonlinear constraints that are neither convex nor concave but are sums of products of linear times concave functions with respect to the uncertain parameters. The proposed approach combines a cutting set method with reformulation-perspectification techniques and branch and bound. We show that our approach also applies to robust convex optimization, which is a sub-class of problems involving sums of linear times concave functions in the uncertain parameters, thereby extending the existing literature. Numerical experiments on a robust convex geometric optimization problem, a robust quadratic optimization problem with data uncertainty and implementation error, and a lot-sizing problem in which we impose a quadratic decision rule demonstrate the effectiveness of the proposed approach.
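
Written out in generic notation (a sketch of the structure described above), a robust constraint in this class has the form

\[
\sum_{k=1}^{K} \bigl(a_k(x)^\top \zeta + b_k(x)\bigr)\, g_k(\zeta) \;\le\; d(x)
\qquad \text{for all } \zeta \in \mathcal{Z},
\]

where each g_k is concave in the uncertain parameter ζ and the first factor is affine in ζ, so the left-hand side is in general neither convex nor concave in ζ; verifying such a constraint over Z is the worst-case (pessimization) problem that the combination of cutting sets, reformulation-perspectification, and branch and bound is designed to handle.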

Abstract:

Column-and-constraint generation (CCG) algorithms are among the most popular approaches for solving adjustable robust problems. In recent years, they have been successfully applied to a variety of problems, most of which have continuous wait-and-see decisions. While it is typically acknowledged that CCG can scale relatively well for problems with continuous wait-and-see decisions, the same cannot be claimed for problems in which (mixed-)integer decisions must be taken after the uncertainty is revealed. In this talk, we computationally evaluate the performance of CCG algorithms when applied to problems with mixed-integer second-stage decisions and discuss their main weaknesses. Then, we show how column-generation techniques can be employed to alleviate these burdens and successfully reduce the computational time needed to prove optimality. Additionally, we introduce a new heuristic method, based on the column-generation procedure, to quickly find feasible solutions at each step of the CCG. The scheme can also be enhanced with domain-specific knowledge on the deterministic version of the problem being solved and can exploit parallelism. Finally, we highlight that the method can, at each iteration, be warm-started by knowledge gained at previous iterations of the CCG. This is in contrast to standard CCG algorithms, where each iteration solves a new problem from scratch. The computational benefits of the proposed scheme are then evaluated on two optimization problems arising in logistics and scheduling.
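
For orientation, a bare-bones CCG main loop is sketched below (a generic skeleton with placeholder callables, not the implementation from the talk); the column-generation machinery discussed here would live inside solve_master, where the mixed-integer recourse copies are priced out and warm-started across CCG iterations.

def ccg(solve_master, solve_adversarial, tol=1e-6, max_iters=100):
    # solve_master(scenarios): returns a first-stage decision that is feasible
    # for every scenario generated so far, together with a lower bound.
    # solve_adversarial(x): returns a worst-case scenario for x and the
    # corresponding objective value (an upper bound when x is fully feasible).
    scenarios, x = [], None
    lower, upper = -float("inf"), float("inf")
    for _ in range(max_iters):
        x, lower = solve_master(scenarios)
        xi, value = solve_adversarial(x)
        upper = min(upper, value)
        if upper - lower <= tol:
            break
        scenarios.append(xi)  # add the violated scenario's variables/constraints
    return x, lower, upper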

Bio:

Danique is a PhD candidate at the University of Amsterdam. Her research focuses on finding new methods to globally solve hard minimization problems with concave parts in the objective and/or constraint functions. These methods can also be used to solve robust nonlinear optimization problems exactly.

Bio:

Henri is currently a postdoctoral researcher at Universität Trier (Germany). He received a Master's degree in "Optimization and learning of systems" from the Université de Technologie de Compiègne (France) in 2019, as well as an Engineering degree in "Production systems" from the Università degli studi di Genova (Italy) in the same year. He earned his Ph.D. at the Università di Bologna (Italy) under the supervision of Michele Monaci and Enrico Malaguti, where he studied adjustable robust optimization problems with nonlinear recourse. His main research interests are in optimization under uncertainty and decomposition methods for large-scale applications. In particular, his work focuses on exact methods for two-stage robust problems with mixed-integer convex wait-and-see decisions using decomposition approaches.

October 20, 2023

15:00 (CET)

Erasmus University Rotterdam 


Title: Two-Stage Robust Quadratic Optimization with Equalities and its Application to Optimal Power Flow 

Abstract: 

In this work, we consider two-stage quadratic optimization problems under ellipsoidal uncertainty. In the first stage, one needs to decide upon the values of a subset of the optimization variables (control variables). In the second stage, the uncertainty is revealed, and the rest of the optimization variables (state variables) are determined as a solution to a known system of possibly non-linear equations. This type of problem occurs, for instance, in optimization for dynamical systems, such as electric power systems as well as gas and water networks. We propose a convergent iterative algorithm to build a sequence of approximately robustly feasible solutions. At each iteration, the algorithm optimizes over a subset of the feasible set and uses affine approximations of the second-stage equations while preserving the non-linearity of other constraints. We implement our approach for AC Optimal Power Flow and demonstrate the performance of our proposed method on Matpower instances. This work focuses on quadratic problems, but the approach is suitable for more general setups.

 

This talk is based on the following working paper: https://arxiv.org/abs/2104.03107. This is joint work with Bissan Ghaddar from Ivey Business School and Daniel Molzahn from Georgia Tech.
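
In generic notation (a sketch; the paper's precise formulation may differ), the problem class is

\[
\min_{u \in \mathcal{U}} \; c(u)
\quad \text{s.t.} \quad
\forall \xi \in \mathcal{E} \ \exists x : \; g(u, x, \xi) = 0, \;\; h(u, x, \xi) \le 0,
\]

where u collects the control variables fixed in the first stage, \mathcal{E} is an ellipsoidal uncertainty set, and the state variables x are pinned down by the (possibly non-linear) equality system g(u, x, ξ) = 0 once ξ is revealed, as with the power flow equations in AC Optimal Power Flow. The iterative algorithm replaces g by affine approximations around the current iterate while keeping the remaining constraints non-linear.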




Bio:

Olga is an assistant professor of optimization at the Econometric Institute of Erasmus University Rotterdam. She develops solution approaches and approximation algorithms for non-linear optimization problems using convex programming. Typical applications for her research are network-based problems, such as optimization of energy, water, and transport systems.  

October 6, 2023

15:00 (CET)

Melvyn Sim

National University of Singapore


Title: The Occam's Razor of Robustness 

Abstract: 

Robust optimization has become an indispensable tool in tackling uncertainties in optimization. The budgeted uncertainty set introduced by Bertsimas and Sim (2004) is a prevalent approach in this field due to its effectiveness in addressing a host of practical linear optimization problems under uncertainty, though challenges remain in multi-period decision models. In this work, we revisit this approach with a distinct angle: bypassing traditional support boundaries and introducing a robust satisficing model where preferences are conveyed through targets. At the core of our discussion is the cardinal fragility criterion, which evaluates a solution’s vulnerability to input deviations in reaching a target. Our analysis employs a prediction disparity metric rooted in the L1-norm, echoing Bertsimas and Sim (2004), but diverges with its boundary-free support. This fragility construct seamlessly integrates into a multi-attribute robust satisficing model, prioritizing alignment with predetermined attribute targets. By eliminating support constraints, we extend the boundaries of our computational reach. Our approach proficiently handles multiperiod models by employing direct and straightforward formulations, without introducing dual variables or resorting to approximations. Additionally, we present a method to optimize the parameters of the prediction disparity metric using prediction residuals, ensuring the desired confidence profile across various target attainment levels. We conducted two studies: one focusing on portfolio optimization, and the other using simulation to explore if heightened crashing costs can enhance the likelihood of on-time project completion.
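
Schematically, in generic notation and for a single attribute (the talk's multi-attribute model is richer), the robust satisficing problem underlying the cardinal fragility criterion reads

\[
\min_{x \in \mathcal{X},\, k \ge 0} \; k
\quad \text{s.t.} \quad
f(x, \xi) \;\le\; \tau + k \,\|\xi - \hat{\xi}\|_1
\quad \text{for all } \xi,
\]

where τ is the target, \hat{\xi} is the prediction, the L1-norm term is the prediction disparity metric, and the optimal k measures how fragile target attainment is to deviations of the input from its prediction; note that ξ ranges over a boundary-free support rather than a bounded uncertainty set.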



Bio:

Dr. Melvyn Sim is Professor and Provost's Chair at the Department of Analytics & Operations, NUS Business School. His research interests fall broadly under the categories of decision making and optimization under uncertainty, with applications ranging from finance, supply chain management, and healthcare to engineered systems. He is one of the active proponents of Robust Optimization and Satisficing, and has given keynote seminars at international conferences. Dr. Sim is currently serving as a Department Editor of MSOM, and as an associate editor for Operations Research, Management Science, and the INFORMS Journal on Optimization.

September 22, 2023

15:00 (CET)

Trier University


Title: The Connections Between Bilevel and Robust Optimization

Abstract: 

Robust optimization is a prominent approach in optimization for dealing with uncertainties in the data of the optimization problem by hedging against the worst-case realization of the uncertain event. Doing so usually leads to a multilevel structure of the mathematical formulation that is very similar to what we are used to considering in bilevel optimization. Hence, these two fields are closely related, but the study of their combination is still in its infancy.

In this talk, we analyze the connections between different types of robust problems (strictly robust problems with and without decision-dependence of their uncertainty sets, min-max-regret problems, and two-stage robust problems) as well as different types of bilevel problems (optimistic problems, pessimistic problems, and robust bilevel problems). We hope that these results pave the way for a stronger connection between the two fields, in particular the use of both theory and algorithms from one field in the other and vice versa.
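
As a small illustration of the structural similarity (a sketch in generic notation), the strictly robust problem

\[
\min_{x} \; \max_{u \in U} f(x, u)
\quad \text{s.t.} \quad
g(x, u) \le 0 \quad \text{for all } u \in U
\]

can be read as a bilevel problem in which the leader chooses x and an adversarial follower chooses u from U to maximize the leader's objective (or to violate a constraint if possible); decision-dependent uncertainty sets U(x), min-max-regret objectives, and two-stage recourse lead to analogous multilevel structures, which is where the correspondences analyzed in the talk come in.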


Bio:

Short CV

Since 1/2019: Full professor (W3) for “Nonlinear Optimization” at Trier University

4/2014-9/2018: Junior professor (W1) for “Optimization of Energy Systems” at the Friedrich-Alexander-Universität Erlangen-Nürnberg

1/2013: PhD in mathematics at the Leibniz Universität Hannover

10/2008: Diplom in mathematics (minor computer science) at the Leibniz Universität Hannover

Editorial Work

Since June 2022: Editorial Board member of the Journal of Optimization Theory and Applications

Since May 2022: Editorial Board member of Optimization Letters

Since 2021: Associate Editor of OR Spectrum

Since 2021: Associate Editor of the EURO Journal on Computational Optimization

Since 2011: Technical Editor of Mathematical Programming Computation

Awards

Optimization Letters Best Paper Award for the year 2021

Marguerite Frank Award for the best EJCO paper in 2021

MMOR Best Paper Award 2021

1st prize (scientific category) of the ROADEF/EURO Challenge 2020

Howard Rosenbrock Prize 2020

EURO Excellence in Practice Award 2016

September 8, 2023

15:00 (CET)

Dick den Hertog and Thodoris Koukouvinos 

University of Amsterdam and Massachusetts Institute of Technology


Title: A novel algorithm for a broad class of nonconvex optimization problems - and how this can be used in Robust Optimization

Abstract: 

We propose a new global optimization approach for solving nonconvex optimization problems in which the nonconvex components are sums of products of convex functions. A broad class of nonconvex problems can be written in this way, such as concave minimization problems, difference-of-convex problems, and fractional optimization problems. Our approach exploits two techniques: first, we introduce a new technique, called the Reformulation-Perspectification Technique (RPT), to obtain a convex approximation of the considered nonconvex continuous optimization problem. Next, we develop a spatial branch-and-bound scheme, leveraging RPT, to obtain a globally optimal solution. Numerical experiments on four different convex maximization problems, a quadratically constrained quadratic optimization problem, and a dike height optimization problem demonstrate the effectiveness of the proposed approach. In particular, our approach solves more instances to global optimality for the considered problems than BARON and SCIP. Moreover, for problem instances of larger dimension, our approach outperforms both BARON and SCIP in computation time on most instances, while for smaller dimensions BARON overall performs better on computation time. Finding the worst-case scenario in Robust Optimization is often a nonconvex optimization problem. We therefore discuss how this novel algorithm can be used to solve robust and adaptive nonlinear optimization problems.
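
In generic form (a sketch in our own notation), the targeted class of problems is

\[
\min_{x \in X} \; f_0(x) + \sum_{k=1}^{K} f_k(x)\, g_k(x),
\]

with f_0, f_k, and g_k convex. Roughly, RPT obtains a convex relaxation by introducing auxiliary variables for the product terms and adding valid, perspective-based inequalities that link them, and the spatial branch-and-bound scheme then tightens this relaxation until the bound meets a feasible solution. The connection to robust optimization is that the pessimization step of finding a worst-case scenario often has precisely this kind of nonconvex structure.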

Bio:

Dick den Hertog is a professor of Operations Research at the University of Amsterdam. His research interests cover various fields in prescriptive analytics, in particular linear and nonlinear optimization. In recent years his main focus has been on robust optimization, and recently he started research on Optimization with Machine Learning. He is also active in applying the theory in real-life applications. In particular, he is interested in applications that contribute to a better society. He received the INFORMS Franz Edelman Award twice: in 2013 for his research on optimal flood protection, and in 2021 for his research on optimizing the food supply chain for the UN World Food Programme. Currently, he is doing research to develop better optimization models and techniques for cancer treatment together with researchers from Harvard Medical School and Massachusetts General Hospital (Boston, USA), and he is involved in research to optimize the locations of healthcare facilities, e.g., in Timor-Leste and Vietnam, together with the World Bank. He has been Visiting Professor at MIT for several years now. He is  Science-to-Impact Director of the Analytics for a Better World Institute, which he co-founded in 2022.

Bio:

Thodoris Koukouvinos is a fourth-year PhD student at MIT, Operations Research Center, advised by Prof. Dimitris Bertsimas. Before joining MIT, he studied Applied Mathematics and Physics at the National Technical University of Athens in Greece. His research interests include ethics in machine learning, robust optimization, and nonconvex optimization.