MARIA BAZOTTE
Abstract: Real-world decision-making problems often involve decision-dependent uncertainty, where the probability distribution of the random vector depends on the model's decisions. Few studies focus on two-stage stochastic programs with this type of endogenous uncertainty, and those that do lack general methodologies. We propose a general method for solving a class of these programs based on random variable transformation, a technique widely employed in probability and statistics. The random variable transformation converts a stochastic program with endogenous uncertainty into an equivalent stochastic program with decision-independent uncertainty, for which solution procedures are well studied. Endogenous uncertainty usually leads to nonlinear nonconvex programs, which are computationally intractable in general. However, we show that, for some classical endogenous distributions, the proposed method yields mixed-integer linear or convex programs with exogenous uncertainty. We validate this method by applying it to a network design and facility-protection problem, considering distinct decision-dependent distributions for the random variables. While the original formulation of this problem is nonlinear and nonconvex for most endogenous distributions, the proposed method transforms it into mixed-integer linear programs with exogenous uncertainty.
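As a minimal illustration of the transformation idea (a generic sketch, not the talk's specific construction): suppose a random variable ξ has a decision-dependent distribution with cumulative distribution function F_x. The inverse-transform trick rewrites ξ as a deterministic function of the decision x and a decision-independent uniform random variable U:

\[ \xi(x) = F_x^{-1}(U), \qquad U \sim \mathrm{Uniform}(0,1). \]

For instance, if ξ is exponentially distributed with decision-dependent rate λ(x), then ξ(x) = -ln(1 - U)/λ(x), and the stochastic program can be rewritten over the exogenous random variable U.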
Bio: Maria Bazotte is a PhD candidate at Polytechnique Montreal, where she is supervised by Margarida Carvalho and Thibaut Vidal. Her research focuses on tackling challenges in solving stochastic programs with decision-dependent uncertainty. She is interested in decision-making and optimization under uncertainty, with applications ranging from school choice and network design to healthcare management problems.
HAOYUN DENG
Abstract: We study stochastic mixed integer programs with both first-stage and recourse decisions involving mixed integer variables. A new family of Lagrangian cuts, termed "ReLU Lagrangian cuts," is introduced by reformulating the nonanticipativity constraints using ReLU functions. These cuts can be integrated into scenario decomposition methods. We show that including ReLU Lagrangian cuts is sufficient to achieve optimality in the original stochastic mixed integer programs. Without solving the Lagrangian dual problems, we derive closed-form expressions for these cuts. Furthermore, to speed up the cut-generating procedures, we introduce linear programming-based methods to enhance the cut coefficients. Numerical studies demonstrate the effectiveness of the proposed cuts compared to existing cut families.
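One plausible reading of the construction (a sketch; the paper's exact form may differ): with z denoting the first-stage copy and x_s its scenario-s duplicate, the nonanticipativity constraint x_s = z can be restated through ReLU functions, since both terms below are nonnegative:

\[ \mathrm{ReLU}(x_s - z) + \mathrm{ReLU}(z - x_s) = 0, \qquad \mathrm{ReLU}(a) := \max(a, 0). \]

Dualizing these reformulated constraints with nonnegative multipliers then yields piecewise linear Lagrangian cuts rather than the usual affine ones.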
Bio: Haoyun Deng is a third-year Ph.D. student in Operations Research at the H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, advised by Dr. Weijun Xie. She holds a B.S. in Mathematics and Applied Mathematics from the Shanghai University of Finance and Economics in China. Her research focuses on stochastic programming and its applications, with an emphasis on developing efficient methods for solving two-stage and multistage problems with discrete decisions.
Abstract: We consider a two-stage stochastic multi-objective linear program (TSSMOLP), which is a natural multi-objective generalization of the well-studied two-stage stochastic linear program. The second-stage recourse decision is governed by an uncertain multi-objective linear program whose solution maps to an uncertain second-stage nondominated set. The TSSMOLP then comprises the objective function, which is the Minkowski sum of a linear term and the expected value of the second-stage nondominated set, and the constraints, which are linear. Since the second-stage nondominated set is a random set, its expected value is defined through the selection expectation. The global Pareto set is defined as the collection of nondominated points in the image space of the TSSMOLP. We discuss properties of TSSMOLPs and the multifunctions that arise therein, as well as the implications of these properties for the development of TSSMOLP solution methods. We illustrate the TSSMOLP and its properties through an example in disaster relief planning.
This work is joint work with Akshita Gupta, Edwardson School of Industrial Engineering, Purdue University.
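For reference, the selection expectation of a random closed set S (also called the Aumann expectation) is the closure of the set of expectations of all integrable selections:

\[ \mathbb{E}[S] = \mathrm{cl}\,\big\{ \mathbb{E}[\zeta] \;:\; \zeta \text{ integrable},\ \zeta(\omega) \in S(\omega) \text{ a.s.} \big\}. \]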
Bio: Susan R. Hunter is an associate professor in the Edwardson School of Industrial Engineering at Purdue University. Her research interests include theoretical and algorithmic aspects of stochastic optimization in the presence of multiple performance measures with emphasis on asymptotics, computation, and application. She is the recipient of an NSF CAREER Award, and her published works have been recognized by the INFORMS Computing Society in 2011, by IISE Transactions in 2017, and by The Operational Research Society in 2021. She currently serves as Vice President / President Elect of the INFORMS Simulation Society and as an associate editor for Flexible Services and Manufacturing Journal, INFORMS Journal on Data Science, Journal of Optimization Theory and Applications, and Operations Research. She previously served as the Program Chair for the 2024 Winter Simulation Conference and as an associate editor for IISE Transactions. She is an alumna of the Department of Statistics at North Carolina State University and holds a Ph.D. in Industrial and Systems Engineering from Virginia Tech. Prior to joining Purdue, she worked as a postdoctoral associate in Operations Research and Information Engineering at Cornell University.
Abstract: Recent advances in operations research (OR) and machine learning (ML) have spurred interest in integrating prediction algorithms with optimization techniques to address decision-making under uncertainty. This has led to the emergence of contextual optimization, a field focused on data-driven methods that prescribe actions based on the most recently available information. These models appear in both OR and ML literature under various names, including data-driven optimization, prescriptive optimization, predictive stochastic programming, policy optimization, (smart) predict-then-optimize, decision-focused learning, and (task-based) end-to-end learning/forecasting/optimization.
In this talk, we will see that these approaches can be unified under the contextual optimization framework. Then, I will discuss some models and methodologies for learning policies from data, with a particular emphasis on decision-focused learning, highlighting its role in improving prescriptive analytics and the associated challenges.
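Schematically, contextual optimization prescribes, for each observed context x, a decision minimizing the conditional expected cost (generic notation, not the speaker's):

\[ z^{\star}(x) \in \operatorname*{arg\,min}_{z \in \mathcal{Z}} \; \mathbb{E}\big[ c(z, Y) \mid X = x \big], \]

where X collects the covariates and Y the uncertain parameters. Decision-focused (end-to-end) learning trains the prediction model by the downstream decision cost it induces, rather than by prediction accuracy alone.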
Bio: Utsav Sadana is an Assistant Professor in the Department of Computer Science and Operations Research at the Université de Montréal, Canada, and a member of GERAD and CIRRELT. He earned a dual degree from the Indian Institute of Technology, Kanpur, completing a Bachelor’s in Materials Science and Engineering and a Master’s in Economics in 2017. He later obtained his Ph.D. in Management Science from HEC Montréal in 2021 and subsequently held a postdoctoral fellowship at McGill University until 2023. He was also a visiting researcher at the Coordinated Science Lab, University of Illinois at Urbana-Champaign (USA), from September 2019 to February 2020. His research focuses on dynamic game theory and data-driven optimization, with a keen interest in distributionally robust optimization for decision-making under uncertainty.
Abstract: More than 35 years ago, the stochastic dual dynamic programming (SDDP) algorithm was proposed as an efficient method to solve multistage stochastic optimization problems when the planning horizon is large, motivated by strategic operation planning for the Brazilian hydroelectric systems. SDDP uses duality to build lower bounds for the value functions, constructing piecewise linear under-approximations, and relies on stochastic sampling of the uncertainty to both explore the state space and evaluate policies. In 2011, prompted by recent rationing experiences in Brazil, SDDP was extended to incorporate risk aversion, which led to the need for new algorithms for evaluating policy quality. Using duality again, we show that it is possible to build a recursion that approximates the value functions in risk-averse problems from above, and we further establish a link with a primal approach for constructing such upper bounds. We conclude by showing that the induced dual policies also enjoy a natural performance guarantee, and we compare them to primal ones in hydrothermal power planning problems.
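For context, the standard SDDP lower approximation keeps, for each stage t, a collection of affine cuts (a generic sketch of the well-known recursion, not the talk's new material):

\[ \mathcal{Q}_t(x_{t-1}) \;\ge\; \max_{k} \big\{ \alpha_k + \beta_k^{\top} x_{t-1} \big\}, \]

where each pair (α_k, β_k) comes from dual solutions of sampled stage-t subproblems. The talk concerns the complementary question: a dual recursion that approximates risk-averse value functions from above, yielding policies with performance guarantees.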
Bio: Bernardo Costa is Associate Professor in the School of Applied Mathematics at Fundação Getulio Vargas in Rio de Janeiro. Bernardo graduated from École Polytechnique in 2009, and went on to obtain a Ph.D. in Mathematics from Université Paris-Sud in 2012. He also holds a B.Sc. degree in Mathematics from Universidade Federal do Rio de Janeiro, and was a visiting scholar at Georgia Tech in 2019. He has developed models and algorithms in stochastic optimization, and especially in multistage problems with applications in energy and finance. Bernardo also has longstanding collaborations with the Brazilian Independent System Operator (ONS) for the analysis, modeling and development of optimization tools for the Brazilian power system, and is organizing the upcoming 2025 edition of the Hydropower Scheduling Conference.
Abstract: Designing policies for a network of agents is typically done by formulating an optimization problem where each agent has access to state measurements of all the other agents in the network. Such policy designs with centralized information exchange result in optimization problems that are typically hard to solve, require establishing substantial communication links, and do not promote privacy since all information is shared among the agents. Designing policies based on arbitrary communication structures can lead to nonconvex optimization problems that are typically NP-hard. In this work, we propose an optimization framework for decentralized policy designs. In contrast to the centralized information exchange, our approach requires only local communication exchange among the neighboring agents matching the physical coupling of the network. Thus, each agent only requires information from its direct neighbors, minimizing the need for excessive communication and promoting privacy amongst the agents. Using robust optimization techniques, we formulate a convex optimization problem with a loosely coupled structure that can be solved efficiently. We numerically demonstrate the efficacy of the proposed approach in energy management and supply chain applications. We show that the proposed approach leads to solutions that closely approximate those obtained by the centralized formulation at only a fraction of the computational effort.
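A plausible shape for such a decentralized policy (an illustrative sketch; the talk's parameterization may differ) is a linear decision rule restricted to an agent's neighborhood:

\[ u_i(\xi) = k_i + \sum_{j \in \mathcal{N}(i)} K_{ij}\, \xi_j, \]

where N(i) contains agent i and its physical neighbors, so agent i's decision depends only on locally available information. Optimizing over the coefficients (k_i, K_{ij}) with robust optimization techniques leads to the loosely coupled convex program mentioned above.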
Bio: Angelos Georghiou received his M.Sci. degree in Mathematics in 2008 and his Ph.D. degree in Operations Research at the Department of Computing in 2012, both from Imperial College London. During 2012–2013, he was a postdoctoral researcher at the Process Systems Engineering Laboratory at MIT, and in 2013–2016 a postdoctoral researcher at the Automatic Control Laboratory at ETH Zurich. From 2016 to 2019, he was Assistant Professor of Operations Management in the Desautels Faculty of Management at McGill University. He is currently an Assistant Professor of Operations Management at the Department of Business and Public Administration at the University of Cyprus. His research focuses on the development of tractable computational methods for the solution of stochastic and robust optimization problems, as well as applications in operations management, healthcare and energy.
January 17, 2025, 10-11am ET
Recording: Link
Slides: To be posted later.
Abstract: In optimization and decision recommendation problems that involve parameter information gathered from multiple data sources with varying reliability, incorporating users' trust in optimization models can potentially improve their solution quality and performance. In this talk, we discuss a multi-reference distributionally robust optimization (MR-DRO) framework, where the model inputs are stochastic, and their distributions could be statistically inferred from multiple sources. We show trust-aided parametric and nonparametric ways of constructing ambiguity sets that account for both uncertainty and unknown reliability of the information sources, and derive tractable reformulations of the MR-DRO model. We use a dynamic trust update mechanism that adjusts or learns the true trust/data reliability over time as more information becomes available. We give three examples of applying MR-DRO to (i) resource allocation, (ii) portfolio optimization, and (iii) route recommendation during evacuation, to demonstrate the effectiveness of the trust-informed MR-DRO approach compared to models that rely on a single data source. Our results highlight the significance of integrating (dynamic) user trust in decision making under uncertainty, particularly when given diverse and potentially conflicting information sources.
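One natural way to picture a multi-reference ambiguity set (a schematic guess consistent with the abstract, not necessarily the talk's exact construction): given reference distributions P̂_1, ..., P̂_m inferred from the m sources and trust levels t_i, take

\[ \mathcal{P}(t) = \big\{ P \;:\; d\big(P, \hat{P}_i\big) \le \varepsilon_i(t_i), \;\; i = 1, \dots, m \big\}, \]

where the radius ε_i shrinks as the trust t_i in source i grows, so highly trusted sources constrain the ambiguity set more tightly.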
Bio: Siqian Shen is a Professor in the Department of Industrial and Operations Engineering at the University of Michigan at Ann Arbor, and was an Associate Director for the Michigan Institute for Computational Discovery & Engineering (MICDE) from 2016-2023. Siqian obtained her B.S. degree from Tsinghua University in 2007 and her Ph.D. from the University of Florida in 2011. Her research studies models and algorithms in stochastic integer programming, network optimization, and distributionally robust optimization, with applications mainly in energy, transportation and logistics. Prof. Shen is currently serving on the editorial boards for several journals, including Operations Research, Manufacturing & Service Operations Management, European Journal of Operational Research, and was an Associate Editor for Networks, Transportation Science, INFORMS Journal on Computing, Service Science, and IISE Transactions. Prof. Shen has received several recognitions and awards, including the 2024 US National Science Foundation Director’s Award for Superior Accomplishment (Group), 2017 US Department of Energy (DoE) Early Career Award, the 2012 IIE Pritsker Doctoral Dissertation Award (1st Place), and several best paper prizes from INFORMS, IEEE, and IISE.
Abstract: Over the past two decades, robust optimization techniques have efficiently addressed decision problems under uncertainty, offering high assurance of feasibility without being overly conservative. However, research on estimating the parameters of these robust decision models from data has been lacking. In this paper, we focus on a unified framework for robust decision models that integrates the robust optimization and robust satisficing paradigms. In particular, we identify two layers of uncertainty: outcome uncertainty, involving deviations from specified inputs based on historical data, and estimation uncertainty, addressing deviations from latent input vectors, such as the unobservable true mean. We introduce estimation and prediction procedures tailored to reduce conservativeness while enhancing feasibility within this unified robust decision framework. We then propose the concept of a minimum volume confidence set, which minimizes the size of the outcome confidence set while accounting for the likelihood of infeasibility, thereby improving the precision of uncertainty characterization. This method also accommodates asymmetric uncertainty by adjusting the confidence set accordingly. Additionally, our framework incorporates an affine predictive model that leverages side information to improve input vector predictions, seamlessly integrating into robust decision models. Our method has been implemented in the algebraic modeling toolbox RSOME, facilitating practical applications.
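Since the abstract mentions RSOME, here is a minimal robust linear program in its ro module (API recalled from RSOME's public documentation; the problem data are illustrative and unrelated to the talk, so treat this as a sketch and check the current docs):

```python
from rsome import ro
from rsome import grb_solver as grb   # or another installed solver interface
import numpy as np

model = ro.Model()
x = model.dvar(3)                     # here-and-now decision variables
z = model.rvar(3)                     # uncertain cost perturbations
zset = (z >= -1, z <= 1)              # box uncertainty set

c = np.array([3.0, 4.0, 5.0])         # nominal costs (illustrative)
model.minmax((c + z) @ x, zset)       # minimize worst-case cost over zset
model.st(x.sum() == 1, x >= 0)        # simplex feasibility
model.solve(grb)

print(model.get())                    # worst-case optimal objective value
print(x.get())                        # robust optimal decision
```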
Bio: Melvyn Sim is a Professor at the Department of Analytics & Operations, NUS Business School. His research interests fall broadly under the categories of decision making and optimization under uncertainty, with applications ranging from finance, supply chain management, and healthcare to engineered systems.
DI ZHANG
Abstract: The Progressive Hedging Algorithm (PHA) is a cornerstone in tackling large-scale stochastic programming (SP) challenges. However, its traditional implementation is hindered by several limitations, including the requirement to solve all scenario subproblems in each iteration, reliance on an explicit probability distribution, and a convergence process that is highly sensitive to the choice of penalty parameters. Our project introduces a sampling-based PHA that aims to overcome these limitations. Our approach employs a dynamic selection process for the number of scenario subproblems solved per iteration. It incorporates adaptive sequential sampling for determining sample sizes, stochastic conjugate subgradient methods for direction finding, and a line-search technique to update the dual variables. Experimental results demonstrate that this novel algorithm not only addresses the bottlenecks of the conventional PHA but also potentially surpasses it in scalability, representing a substantial improvement in the field of stochastic programming.
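For orientation, here is the classical PHA loop on a toy quadratic problem with closed-form subproblem solutions (a runnable sketch of the baseline algorithm only; the talk's sampling-based refinements are not shown):

```python
import numpy as np

# Classical Progressive Hedging on a toy problem:
#   min_x  E_s[ c_s * x + 0.5 * a_s * x^2 ]
# Each augmented-Lagrangian subproblem is solved in closed form here.

rng = np.random.default_rng(0)
S = 10
p = np.full(S, 1.0 / S)              # scenario probabilities
c = rng.normal(1.0, 0.5, S)          # scenario linear costs
a = rng.uniform(0.5, 2.0, S)         # scenario curvatures (> 0)
rho = 1.0                            # penalty parameter

x = -c / a                           # start from scenario-wise minimizers
w = np.zeros(S)                      # dual weights (prices)
for _ in range(100):
    xbar = p @ x                     # nonanticipative average
    w += rho * (x - xbar)            # dual update
    # Solve min_x c_s x + 0.5 a_s x^2 + w_s x + (rho/2)(x - xbar)^2:
    x = (rho * xbar - w - c) / (a + rho)

print("PH solution:    ", p @ x)
print("exact solution: ", -(p @ c) / (p @ a))
```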
Bio: Di is an Applied Scientist at Amazon.com, Inc. Prior to joining Amazon, he was a Ph.D. candidate in the Department of Industrial and Systems Engineering at the University of Southern California (USC), where he conducted research under the guidance of Professor Suvrajeet Sen. His work focuses on tackling challenges in stochastic programming, with a particular emphasis on developing innovative algorithms and analyzing their convergence properties. Recently, he has been working on designing an adaptive sampling progressive hedging algorithm to address large-scale stochastic programming problems.
BIANCA MARIN MORENO
Abstract: To counter the challenge of integrating fluctuating renewables into the grid, devices like thermostatically controlled loads (water heaters, air conditioners, etc.) offer flexible demand. However, efficiently controlling a large population of these devices to track desired consumption signals remains a complex challenge. Existing methods lack convergence guarantees and computational efficiency. This work addresses these drawbacks. We propose modeling this problem as a finite-horizon episodic Markov decision process (MDP) with finite state and action spaces. This approach allows us to formulate the problem as one of minimizing a convex function of the state-action distribution induced by the policy employed by the devices, a problem also known as convex/concave utility reinforcement learning (CURL). The non-linear nature of CURL invalidates classical Bellman equations, necessitating new algorithms. We introduce a novel Mirror Descent-based algorithm for CURL that offers both theoretical guarantees and computational efficiency. Additionally, we extend the CURL framework to online learning settings, where daily control decisions are made without prior knowledge of consumer behavior and with target profiles that change daily due to energy production fluctuations and inflexible consumption. We propose new solutions for online CURL and discuss how they can be adapted to load control applications.
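In generic notation (not the speaker's), CURL replaces the linear objective of standard RL with a convex function F of the occupancy measure induced by the policy:

\[ \min_{\pi} \; F\big(\mu^{\pi}\big), \qquad \mu^{\pi}(s,a) \propto \sum_{t=1}^{H} \Pr\big(s_t = s,\, a_t = a \mid \pi \big). \]

Standard RL is the special case F(μ) = ⟨μ, c⟩; tracking a target consumption profile corresponds to a genuinely convex choice such as F(μ) = ‖Aμ - y_target‖², which is why classical Bellman machinery no longer applies directly.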
Bio: Bianca (https://biancammoreno.github.io/) is a third-year PhD student at Inria Grenoble in France, in collaboration with EDF R&D, and is supervised by Pierre Gaillard, Nadia Oudjane, and Margaux Brégère. Her research lies at the intersection of reinforcement learning and online learning algorithms, with a particular focus on convex reinforcement learning and mean-field learning. She is interested in the theoretical aspects of sequential decision-making and its practical applications in energy management scenarios. Currently, Bianca is visiting Professor Nicolò Cesa-Bianchi at the University of Milan and Professor Marcello Restelli at Politecnico di Milano, working on multi-armed bandit problems.
Abstract: Many real world decision problems are dynamic and affected by uncertainty. Stochastic Programming provides a powerful approach to handle this uncertainty within a multi-period decision framework. However, as the number of stages increases, these problems become challenging to solve due to their rapid growth in complexity. To tackle this, approximation techniques are often used to simplify the original problem, providing useful upper and lower bounds for the objective function’s optimal value.
This tutorial explores several methods for generating bounds in multistage optimization problems under uncertainty. We begin by reviewing bounds based on distribution and function approximations, drawing from established literature, and illustrating their connection to the "generalized moment problem". Next, we discuss bounds based on scenario grouping under the assumption that a sufficiently large scenario tree is given but is unsolvable, both in the context of stochastic programming and distributionally robust optimization. Finally, guaranteed bounds based on the concepts of first order and convex order stochastic dominance are presented. These approximations are versatile, applying to a wide range of problems without requiring specific conditions like linearity, convexity, or stagewise independence. Our tutorial aims to highlight current research gaps and inspire further investigation into promising directions within this field.
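Two classical examples of the distribution-approximation bounds reviewed in the tutorial: for a recourse function Q(x, ·) that is convex in a scalar random parameter ξ supported on [a, b] with mean ξ̄, the Jensen lower bound and the Edmundson-Madansky upper bound read

\[ Q(x, \bar{\xi}) \;\le\; \mathbb{E}\big[Q(x,\xi)\big] \;\le\; \frac{b - \bar{\xi}}{b - a}\, Q(x, a) + \frac{\bar{\xi} - a}{b - a}\, Q(x, b). \]

Both follow from convexity alone, which is the sense in which such bounds apply without requiring linearity or stagewise independence.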
Bio: Francesca Maggioni is Professor of Operations Research at the Department of Management, Information and Production Engineering (DIGIP) of the University of Bergamo (Italy). She graduated summa cum laude in Mathematics from the Università Cattolica del Sacro Cuore of Brescia (Italy) in 2003 and completed her PhD in Pure and Applied Mathematics at the University of Milano-Bicocca (Italy) in 2006. Her research interests concern both methodological and applied aspects of optimization under uncertainty. From a methodological point of view, she has developed different types of bounds and approximations for stochastic, robust, and distributionally robust multistage optimization problems. She applies these methods to solve problems in logistics, transportation, energy, and machine learning. On these topics she has published more than 60 scientific articles in peer-reviewed operations research journals. In 2021 her research was supported, as principal investigator, by a grant from the Italian Ministry of Education for the project “ULTRA OPTYMAL Urban Logistics and sustainable TRAnsportation: OPtimization under uncertainTY and MAchine Learning”. She currently chairs the EURO Working Group on Stochastic Optimization and the AIRO Thematic Section on Stochastic Programming. She has been secretary and treasurer of the Stochastic Programming Society. She is Associate Editor of the journals Transportation Science, Computational Management Science (CMS), EURO Journal on Computational Optimization (EJCO), TOP, International Transactions in Operational Research (ITOR), and Networks, and guest editor of several special issues in operations research and applied mathematics journals.
Abstract: We propose and analyze a new data-driven trade-off (TRO) approach for modeling uncertainty that serves as a middle ground between the optimistic approach, which adopts a distributional belief, and the pessimistic distributionally robust optimization approach, which hedges against distributional ambiguity. We equip the TRO model with a TRO ambiguity set characterized by a size parameter controlling the level of optimism and a shape parameter representing distributional ambiguity. We first show that constructing the TRO ambiguity set using a general star-shaped shape parameter with the empirical distribution as its star center is necessary and sufficient to guarantee the hierarchical structure of the sequence of TRO ambiguity sets. Then, we analyze the properties of the TRO model, including quantifying conservatism, quantifying bias and generalization error, and establishing asymptotic properties. Specifically, we show that the TRO model could generate a spectrum of decisions, ranging from optimistic to conservative decisions. Additionally, we show that it could produce an unbiased estimator of the true optimal value. Furthermore, we establish the almost-sure convergence of the optimal value and the set of optimal solutions of the TRO model to their true counterparts. We exemplify our theoretical results using an inventory control problem and a portfolio optimization problem.
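Based on the abstract's description, the TRO ambiguity set can be pictured as (a schematic rendering; see the paper for the precise definition)

\[ \mathcal{P}(\theta) = \big\{ (1-\theta)\,\hat{P}_N + \theta\, Q \;:\; Q \in \mathcal{S} \big\}, \qquad \theta \in [0,1], \]

where P̂_N is the empirical distribution, S is the shape parameter (a set of distributions), and the size parameter θ interpolates between the optimistic sample-average model (θ = 0) and a fully distributionally robust model (θ = 1). Star-shapedness of S about P̂_N is what makes these sets nested in θ.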
Bio: Dr. Karmel S. Shehadeh is an Assistant Professor in the Department of Industrial and Systems Engineering at Lehigh University. Before joining Lehigh, she was a presidential and dean postdoctoral fellow at Heinz College of Information Systems and Public Policy at Carnegie Mellon University. She holds a doctoral degree in Industrial and Operations Engineering from the University of Michigan, a master's degree in Systems Science and Industrial Engineering from Binghamton University, and a bachelor's in Biomedical Engineering from Jordan University of Science and Technology. Her research program aims to push the boundaries of stochastic and (distributionally) robust optimization methodologies and their applications. She has a track record of designing, analyzing, and implementing innovative and computationally efficient stochastic and robust optimization methodologies for real-world problems in various application domains, including healthcare, facility location, and transportation. Another stream of her research focuses on designing fairness-promoting optimization frameworks. In addition, she serves as the task lead for the Marine Energy Supply Chain Optimization at the Atlantic Marine Energy Center (sponsored by the U.S. Department of Energy). Professional recognition of her work includes the 2024 INFORMS Minority Issues Forum (MIF) Early Career Award (honorable mention), the 2022 MIF Paper Award (winner), a Junior Faculty Interest Group (JFIG) Paper Prize, and a Service Science Best Cluster Paper Award. She is an Associate Editor for the INFORMS Journal on Computing, Transportation Science, and Transportation Research Part C. She serves as the President of JFIG and Vice President of SOLA. She also has a track record of proactively leading and supporting DEI initiatives. Her efforts in these areas have been recognized with the 2024 Lehigh ISE DEI Award and the 2023 Lehigh Alfred Noble Robinson Faculty Award. She has been named a finalist for the 2024 INFORMS JFIG Teaching Excellence Award.
Abstract: We consider a two-stage stochastic decision problem where the decision-maker has the opportunity to obtain information about the distribution of the random variables X through a set of discrete actions that we refer to as probing. Specifically, probing allows the decision-maker to observe components of a random vector Y that is jointly distributed with X. We propose a three-stage optimization model for this problem, wherein the first-stage variables select components of Y to observe, and decisions in subsequent stages must be consistent with the obtained information. In the case that X and Y have finite support, Goel and Grossmann gave a mixed-integer programming formulation of this problem whose size is proportional to the square of the cardinality of the sample space of the random variables. We propose to solve the model using bounds obtained from an information-based relaxation, combined with a branching scheme that enforces the consistency of decisions with observed information. The branch-and-bound approach can naturally be combined with sampling in order to estimate both lower and upper bounds on the optimal solution value. We demonstrate the scalability of our approach against the exact MIP formulation on instances of a stochastic facility location problem.
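Schematically (in generic notation, not the authors'), the three-stage probing model has the form

\[ \min_{w \in \{0,1\}^m} \; c^{\top} w \;+\; \mathbb{E}\Big[\, \min_{x \in \mathcal{X}} \; \mathbb{E}\big[\, h(x, X) \;\big|\; Y_j,\; j : w_j = 1 \,\big] \Big], \]

where w selects which components of Y to probe (at a possibly zero cost c), and the inner decision x may depend only on the probed observations. The information-based relaxation mentioned above weakens this consistency (measurability) requirement in order to obtain bounds.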
Bio: Jeff Linderoth is the Harvey D. Spangler Professor in the department of Industrial and Systems Engineering at the University of Wisconsin-Madison and served as the department chairperson from 2016-2021. Prof. Linderoth holds a courtesy appointment in the Computer Sciences department and as a Discovery Fellow at the Wisconsin Institutes of Discovery. Dr. Linderoth received his Ph.D. degree from the Georgia Institute of Technology in 1998. He was previously employed in the Mathematics and Computer Science Division at Argonne National Laboratory, with the optimization-based financial products firm of Axioma, and as an Assistant Professor at Lehigh University. His awards include an Early Career Award from the Department of Energy, the SIAM Activity Group on Optimization Prize, and the INFORMS Computing Society (ICS) Prize. In 2016, he was elected an INFORMS Fellow. In 2010, in the proudest moment of his academic career, he won the Stochastic Programming Tennis Championship.
Abstract: This presentation explains the Expectile, Conditional Value-at-Risk (CVaR), Value-at-Risk (VaR), and other risk measures in the framework of the Risk Quadrangle (RQ) theory. Every quadrangle includes four stochastic functions: Error, Regret, Risk, and Deviation. These functions are interconnected through a Statistic function. RQ links optimization, risk management, and statistical estimation in one theory. We investigate Support Vector Regression (SVR) with RQ theory. Epsilon-SVR and nu-SVR are reduced to minimization of the Vapnik error and the Conditional Value-at-Risk (CVaR) norm, respectively. The Vapnik error and the CVaR norm define quadrangles whose Statistic equals an average of two symmetric quantiles. Therefore, epsilon-SVR and nu-SVR are equivalent asymptotically unbiased estimators of an average of two symmetric conditional quantiles. We also present a new Expectile Quadrangle based on a Piecewise Linear Error, which is an alternative to the standard Asymmetric Variance Error. We prove the equivalence of Expectile regression with these two Errors, as well as the equivalence of Expectile and Quantile regression. We also study the f-divergence primal and dual quadrangles. The framework provides an interpretation of portfolio optimization (e.g., Markowitz optimization) and regression (e.g., least-squares regression) as robust optimization. Theoretical findings are validated with several case studies. (Joint work with Anton Malandii and Victor Kuzmenko)
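For readers new to the quadrangle, its defining relations tie the four functions together through the Statistic (standard formulas from Rockafellar and Uryasev's theory):

\[ \mathcal{R}(X) = \min_{C} \big\{ C + \mathcal{V}(X - C) \big\}, \qquad \mathcal{D}(X) = \min_{C} \, \mathcal{E}(X - C), \qquad \mathcal{S}(X) = \operatorname*{arg\,min}_{C} \big\{ C + \mathcal{V}(X - C) \big\}, \]
\[ \mathcal{V}(X) = \mathcal{E}(X) + \mathbb{E}[X], \qquad \mathcal{R}(X) = \mathcal{D}(X) + \mathbb{E}[X]. \]

In the quantile quadrangle, for example, taking V(X) = (1/(1-α)) E[max(X, 0)] yields Statistic = VaR_α and Risk = CVaR_α, recovering the familiar Rockafellar-Uryasev minimization formula for CVaR.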
Bio: Stan Uryasev is Professor and Frey Family Endowed Chair of Quantitative Finance at Stony Brook University. His research is focused on efficient computer modeling and optimization techniques and their applications in finance. He has published four books and more than 150 research papers. He is a co-inventor of the Conditional Value-at-Risk and Conditional Drawdown-at-Risk optimization methodologies. He has developed optimization software in the risk management area, including Drawdown and Credit Risk minimization. His joint paper with Prof. Rockafellar, "Optimization of Conditional Value-At-Risk" (The Journal of Risk, Vol. 2, No. 3, 2000), is among the 100 most cited papers in finance. Many risk management/optimization packages have implemented the approach suggested in this paper (MATLAB implemented a toolbox). He is on the editorial boards of a number of research journals and is Editor Emeritus and Chairman of the Editorial Board of the Journal of Risk.
Abstract: The utilization of data is becoming paramount in addressing modern optimization problems characterized by increasing complexity. In this talk, I’ll present my vision of key aspects relating to data-driven optimization: (i) should data be used to reconstruct the unknown distribution of uncertainty, or should we rather directly optimize to achieve the final design? (ii) is it possible to endow data-driven designs with certifications of quality that hold beyond any assumed characteristics of the underlying distribution? In the second part of the talk, I will introduce "Scenario Optimization", a collection of data-driven optimization techniques that can provide algorithmic and theoretical responses to the questions posed earlier.
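A flavor of the certificates mentioned in the talk: for a convex scenario program with d decision variables built from N i.i.d. scenarios, Campi and Garatti's classical bound states that the scenario solution x_N* satisfies

\[ \Pr^{N}\big\{ V(x_N^{\star}) > \varepsilon \big\} \;\le\; \sum_{i=0}^{d-1} \binom{N}{i}\, \varepsilon^{i} (1-\varepsilon)^{N-i}, \]

where V(x) is the probability that x violates a yet-unseen constraint. Notably, the bound holds for every underlying distribution, which is precisely the distribution-free certification alluded to in the abstract.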
Bio: Marco C. Campi is Professor of Control and Data-Driven Methods at the University of Brescia, Italy. He has held visiting and teaching appointments with Australian, US, Japanese, Indian, Turkish, and various European universities, besides the NASA Langley Research Center in Hampton, Virginia, USA. He has served as Chair of multiple IFAC Technical Committees and in diverse capacities on the editorial boards of top-tier control and optimization journals. In 2008, he received the IEEE CSS George S. Axelby Award for the article "The Scenario Approach to Robust Control Design". He has delivered plenary and semi-plenary addresses at major conferences including OPTIMIZATION, SYSID, and CDC, besides presenting various talks as an IEEE Distinguished Lecturer for the Control Systems Society. Marco C. Campi is a Fellow of IEEE and a Fellow of IFAC. His interests include: stochastic optimization, inductive methods, data-driven decision-making and the fundamentals of probability theory.
April 26, 2024, 10am-11am ET
RENATA PEDRINI
Abstract: The long-term generation scheduling (LTGS) problem has seen important developments over the years. In this work, we analyze different risk-aversion strategies for handling the impact of climate change on the LTGS problem in hydro-dominated power systems using stochastic dual dynamic programming (SDDP). Despite advances in modeling and solution strategy, there are important issues related to the distributions of inflows (and other random variables), especially under climate change. First, even though time series or other statistical methods are used to model these random parameters, the true probability distribution is never fully known. Furthermore, the historical values used to devise the statistical models may no longer be valid as processes shift. We focus on solutions that mitigate these issues by protecting the system against deviations from the nominal inflow scenario distribution, achieved through a Distributionally Robust Optimization (DRO) framework. Instead of a single distribution, DRO considers all distributions that are sufficiently close to this nominal distribution and optimizes a worst-case expected (or risk-averse) objective over all the considered distributions. DRO is more realistic because it explicitly considers existing data while recognizing that forecasts may contain errors. The DRO policies are tested against risk-neutral (expected-value minimization) and CVaR risk-averse approaches using data from the Brazilian power system. We compare the resulting policies as well as the practical application of different algorithms. The results indicate that incorporating DRO improves the out-of-sample performance of policies.
(Authors: R. Pedrini, G. Bayraksan, E. C. Finardi, F. Beltrán)
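In generic form (the exact ambiguity set and its integration with SDDP may differ from the talk's), the DRO objective protects against distributions near the nominal inflow model:

\[ \min_{x \in \mathcal{X}} \; \sup_{P \,:\, d(P, \hat{P}) \le \varepsilon} \; \mathbb{E}_{P}\big[ c(x, \xi) \big], \]

where P̂ is the nominal inflow distribution estimated from historical data, d is a discrepancy between distributions, and ε sets the level of protection; ε = 0 recovers the risk-neutral model.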
Bio: Renata is a fifth-year Ph.D. candidate in Electrical Engineering at Brazil's Federal University of Santa Catarina (UFSC), where she works under the guidance of Prof. Erlon Finardi. Her research is centered on addressing challenges in power system operation and planning, with a particular emphasis on leveraging innovative algorithms and methodologies. Recently, Renata completed a 10-month collaboration with Prof. Guzin Bayraksan at The Ohio State University, focusing on developing strategies for power system operation in response to the impacts of climate change. Her collaborative efforts extend beyond academia, as she actively collaborates with energy companies to gain insights into the real-world challenges faced by the power system. She is currently with CEPEL, the Center for Electrical Energy Research in Brazil, where she works on inflow scenario generation.
ARAS SELVI
Abstract: The recent advent of data-driven and end-to-end decision-making across different areas of operations management has led to an ever closer integration of prediction models from machine learning and optimization models from operations research. A key challenge in this context is the presence of estimation errors in the prediction models, which tend to be amplified by the subsequent optimization model – a phenomenon that is often referred to as the Optimizer's Curse or the Error-Maximization Effect of Optimization. A contemporary approach to combat such estimation errors is offered by distributionally robust problem formulations that consider all data-generating distributions close to the empirical distribution derived from historical samples, where 'closeness' is determined by the Wasserstein distance. While those techniques show significant promise in problems where all input features are continuous, they scale exponentially when binary and/or categorical features are present. This work demonstrates that such mixed-feature problems can indeed be solved in polynomial time. We present a practically efficient algorithm to solve mixed-feature problems, and we compare our method against alternative techniques both theoretically and empirically on standard benchmark instances.
(Authors: Reza Belbasi, Aras Selvi, Wolfram Wiesemann)
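For concreteness, the distributionally robust learning problem described above reads (in generic notation)

\[ \min_{\theta \in \Theta} \; \sup_{Q \,:\, W(Q, \hat{P}_N) \le \varepsilon} \; \mathbb{E}_{Q}\big[ \ell(\theta; \xi) \big], \]

where P̂_N is the empirical distribution of the N training samples and W is the Wasserstein distance. With mixed features, the ground metric combines a norm on the continuous features with a discrete metric on the binary/categorical ones, and it is this combinatorial structure that naively causes the exponential scaling the talk shows how to avoid.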
Bio: Aras (https://www.arasselvi.com/) is a doctoral candidate at Imperial College Business School, supervised by Professor Wolfram Wiesemann. He is affiliated with the Computational Optimization Group and the Data Science Institute of Imperial College London and has recently completed PhD research internships at The Alan Turing Institute and J.P. Morgan AI Research. His research interests are the theory of data-driven decision-making under uncertainty and its applications in machine learning, privacy, and fairness. In his recent work, he has been designing optimal privacy mechanisms, developing efficient algorithms for robust machine learning, and approximating hard decision-making problems via robust optimization.
Abstract: We study a stochastic combinatorial optimization problem arising from applications in industrial engineering and manufacturing: the uncapacitated single-item lot-sizing problem under uncertain demand and costs. This problem consists of determining the timing and quantity of production of a product on a machine so as to satisfy customer demand while minimizing the costs for the manufacturer.
The problem is modeled as a multi-stage stochastic mixed-integer linear program in which the evolution of the uncertain parameters is represented by a scenario tree. To solve it, we propose a new extension of the stochastic dual dynamic integer programming algorithm (SDDiP). This extension aims at being more computationally efficient in the management of the expected cost-to-go functions involved in the model, in particular by reducing their number and by exploiting current knowledge of the polyhedral structure of the stochastic uncapacitated lot-sizing problem. The algorithm is based on a partial decomposition of the problem into a set of stochastic subproblems, each one involving a subset of nodes forming a sub-tree of the initial scenario tree. We then introduce a cutting-plane generation procedure that iteratively strengthens the linear relaxation of these subproblems and enables the generation of additional strengthened Benders' cuts, which improves the convergence of the method. Our numerical results, carried out on randomly generated large-size instances, show that the proposed algorithm significantly outperforms the SDDiP algorithm.
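Schematically, a strengthened Benders' cut of the kind mentioned above takes the form (a sketch in generic SDDiP notation)

\[ \theta \;\ge\; \mathcal{L}(\pi) + \pi^{\top} x, \]

where π is taken from the LP-relaxation duals of the copy constraint linking the state variable x to the subproblem, and L(π) is computed by solving the mixed-integer subproblem with that constraint dualized; the cut remains valid for the mixed-integer cost-to-go function while being tighter than the ordinary Benders' cut.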
Bio: Céline Gicquel is an Associate Professor in Operations Research at the Université Paris Saclay in France. She obtained a PhD in Industrial Engineering from the Ecole Centrale Paris (France) in 2008 and an accreditation to supervise research from the Université Paris Saclay (France) in 2021. Her research interests involve modeling and solving combinatorial optimization problems coming from applications in manufacturing, energy and transportation. She worked among others on the development of stochastic integer programming approaches for lot-sizing and facility location problems.
Abstract: Multistage stochastic programming is a powerful tool allowing decision-makers to revise their decisions at each stage based on the realized uncertainty. However, in various practical settings, organizations are not able to be fully flexible, as decisions cannot be revised too frequently due to their high organizational impact. Consequently, decision commitment becomes crucial to ensure that initially made decisions remain unchanged for a certain period. To this end, in this talk, we introduce partially adaptive multistage stochastic programming approaches that strike an optimal balance between decision flexibility and commitment by determining the best stages at which to revise decisions, depending on the allowed level of flexibility. We introduce a novel mathematical formulation and establish theoretical properties that allow certain constraint sets to be eliminated. Furthermore, we develop a decomposition method that effectively handles mixed-integer adaptive multistage programs by adapting the integer L-shaped method and Benders decomposition. We further provide analytical results on stylized problems as a function of the revision time. Computational experiments on stochastic lot-sizing and generation expansion planning problems show substantial advantages attained through optimal selection of revision times when flexibility is limited, while demonstrating the computational efficiency of the proposed properties and solution methodology. Optimizing revision times in a less flexible case can outperform arbitrary selection in a more flexible case, achieving performance levels comparable to fully flexible settings.
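The commitment structure can be sketched as follows (generic notation, not the authors' formulation): with a budget of R revisions, binary variables r_t mark revision stages, and decisions in scenarios that share history up to the most recent revision must coincide:

\[ \sum_{t=2}^{T} r_t \le R, \qquad x_t^{\omega} = x_t^{\omega'} \;\; \text{whenever } \omega, \omega' \text{ coincide up to the last revision stage preceding } t. \]

Fully adaptive multistage programming corresponds to r_t = 1 for all t, while a model with no revisions collapses to a two-stage program.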
Bio: Beste Basciftci is an Assistant Professor at the Department of Business Analytics at the Tippie College of Business at the University of Iowa. She obtained her PhD degree in Operations Research from the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Institute of Technology, with a minor in Statistics. She received her bachelor's degrees in industrial engineering and computer engineering from Bogazici University with High Honors. She also holds a master's degree in industrial engineering from Bogazici University. She is broadly interested in data-driven decision-making problems under uncertainty. Methodologically, her research focuses on developing mixed-integer, stochastic programming and distributionally robust optimization approaches to address modeling and computational challenges in operations research and management-related problems. The application areas of these problems mainly involve i) energy systems and sustainability, ii) supply chains and facility location problems, and iii) smart city operations (such as sharing systems and emerging transportation systems). She is also interested in game-theoretic frameworks where the decisions of multiple entities can be considered as part of these optimization approaches. Her research has been recognized with prestigious awards, including the IISE Transactions Best Paper Award in the Focus Issue of Operations Engineering & Analytics, the INFORMS ENRE (Energy, Natural Resources and the Environment Section) Best Student Paper Award, the University of Iowa Early Career Scholar Award, and the Georgia Tech ISyE Alice and John Jarvis Research Award.
Abstract: In this talk we discuss our experience with an ongoing project that aims at building stochastic optimization models for the transition to clean energy in Chile. Many challenges arise in this type of project, from collecting data to properly modeling uncertainty, in addition to solving the problems efficiently and validating the models. We will discuss some of these challenges and how we have approached them.
Bio: Tito Homem-de-Mello is a Professor in the School of Business at Universidad Adolfo Ibañez, Santiago, Chile. He obtained his Ph.D. in Industrial and Systems Engineering from Georgia Institute of Technology, an M.Sc. in Applied Mathematics from the same institution, and a B.Sc. degree in Computer Science from University of São Paulo, Brazil. His research focuses on optimization of systems under uncertainty. In particular, he studies theory and algorithms for stochastic optimization as well as applications of such methods in several areas such as energy, finance, and transportation. Dr. Homem-de-Mello was co-chair of the Program Committee of the XIV International Conference on Stochastic Programming, held in Brazil in 2016, and served for three years as a member of COSP, a steering committee for the stochastic optimization community. He is currently an Associate Editor for Computational Optimization and Applications.
Abstract: In this talk, we will discuss multi-stage stochastic programming (MSP) models and solution approaches for humanitarian relief logistics planning in natural disasters such as hurricanes. We consider logistics decision-making such as the relief item prepositioning, shelter planning, and contingency modality selection over multiple periods prior to the landfall of an impending hurricane. Using stochastic forecast information about the hurricane's attributes over time, we propose MSP models which provide optimal adaptive logistics decision policies. Our preliminary numerical results and sensitivity analyses demonstrate the value of MSP for hurricane relief logistics planning, as well as the trade-offs between policy flexibility, solution quality, and computational effort.
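The underlying model class has the familiar nested form (a generic statement, with stages indexed by forecast epochs before landfall):

\[ \min_{x_1 \in \mathcal{X}_1} \; c_1^{\top} x_1 + \mathbb{E}\Big[ \min_{x_2 \in \mathcal{X}_2(x_1, \xi_2)} c_2^{\top} x_2 + \mathbb{E}\big[ \cdots \mid \xi_{[2]} \big] \Big], \]

so each stage's logistics decisions adapt to the updated hurricane forecast while respecting earlier commitments.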
Bio: Dr. Yongjia Song is an associate professor in the Department of Industrial Engineering at Clemson University. Dr. Song received his Ph.D. degree in industrial and systems engineering from University of Wisconsin-Madison in 2013. Dr. Song’s research interests include optimization under uncertainty, integer programming, and applications of optimization in transportation and logistics, networks, and health care. Dr. Song is a recipient of the NSF CAREER award in 2021, and his research has been supported by several federal funding agencies, such as the NSF, DOE, ONR, among others.
Abstract: Bilevel optimization is a powerful tool for modeling hierarchical decision-making processes between interdependent decision makers. Compared to single-level optimization models under data uncertainty, bilevel programs further involve decision uncertainty: the upper-level decision maker may not be certain about the lower-level decision maker's reaction when the lower-level problem has multiple alternative optimal solutions. In this talk, we present recent results on bilevel programming models with uncertainty under ambiguously known distributions, considering (i) a pessimistic or (ii) an optimistic upper-level decision maker whose decision is binary. For (i), we study a pessimistic bilevel program with moment information about the uncertainty, construct upper bounds based on 0-1 semidefinite programming (SDP) approximations, and develop cutting-plane algorithms to solve the 0-1 SDPs. We conduct numerical studies to demonstrate the effectiveness and efficiency of this approach on a facility location problem. For (ii), we consider an optimistic bilevel program using Wasserstein metrics. We show that favorable data-driven properties, including out-of-sample guarantees, asymptotic consistency, and tractability, are preserved in the bilevel setting. A case study of a demand response problem on a 33-bus distribution network is conducted, and our results draw attention to the need to consider equity constraints with heterogeneous households.
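The optimistic/pessimistic distinction refers to how the upper level resolves ties in the lower-level reaction set R(x) = argmin_y { f(x, y) : y feasible }:

\[ \text{(optimistic)} \quad \min_{x} \min_{y \in \mathcal{R}(x)} F(x, y), \qquad \text{(pessimistic)} \quad \min_{x} \max_{y \in \mathcal{R}(x)} F(x, y). \]

The models in the talk additionally let the problem data depend on uncertain parameters whose distribution is known only ambiguously, through moments in case (i) and a Wasserstein ball in case (ii).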
Bio: Yiling Zhang is an Assistant Professor in the Department of Industrial and Systems Engineering at the University of Minnesota. She received her Ph.D. in Industrial and Operations Engineering from the University of Michigan. Her research interests include stochastic programming, integer programming, and nonlinear programming. Her research has applications to various complex service systems, including transportation systems, power systems, and healthcare operations.
Abstract: In ride-hailing systems, pickup time refers to the time that elapses from the moment a vehicle is dispatched to pick up a rider until the rider is picked up. A fundamental phenomenon in ride-hailing systems is that there is a trade-off between pickup time and the time that a vehicle waits for a dispatch. In short, if vehicles spend little time idle waiting for a dispatch, then few vehicles are available when a rider makes a request, and thus the mean distance between a rider and the closest available vehicle is long, which means that pickup time is long.
This phenomenon is of great importance in ride-hailing. In spite of this, the existing literature on price optimization and on repositioning optimization for ride-hailing ignores pickup time. We consider a Markov Decision Process formulation of the problem of making pricing decisions when a potential customer requests a ride, as well as vehicle repositioning decisions when a rider is dropped off. The Markov Decision Process models the trade-off between pickup time and the time that a vehicle waits for a dispatch. The Markov Decision Process is intractable, and therefore we consider the associated fluid optimization problem. Typical fluid optimization problems replace random variables with their means. In this case, such an approach results in an intractable fluid optimization problem. Instead, we consider a finite mixture approximation of the pickup time distribution, which results in a tractable conic optimization problem.
The usual approach is to use an optimal solution of the fluid optimization problem to control the original stochastic process. We show that in this case the fluid system may have two stable fixed points, one optimal and one suboptimal (as well as an unstable fixed point). The consequence is that the usual approach may result in poor performance, with the stochastic system spending too much time close to the suboptimal fixed point. We show how the fluid optimal solution can be used to construct policies that avoid this trap and that perform much better in simulations than the policies proposed in previous papers.
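A back-of-the-envelope version of the pickup-time trade-off (an illustration, not from the talk): if idle vehicles form a spatial Poisson process with density λ_idle, the expected distance from a rider to the nearest idle vehicle is

\[ \mathbb{E}[\text{pickup distance}] = \frac{1}{2\sqrt{\lambda_{\text{idle}}}}, \]

so halving the number of idle vehicles per unit area increases the expected pickup distance (hence pickup time) by a factor of √2; keeping vehicles busy and keeping pickups short are fundamentally in tension.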
Bio: Anton Kleywegt is a faculty member in the Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology. His research interests include stochastic optimization, with applications in transportation and revenue management.
Abstract: The presentation will begin by describing the smooth bootstrap applied to confidence intervals for the optimality gap associated with a solution to a stochastic program. This extends prior work in which stochastic programs are solved based on a sample from an unknown probability distribution. The second part of the talk will describe a software architecture for computing solutions with upper and lower bounds in an asynchronous parallel environment.
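To fix ideas, here is a toy (smooth) bootstrap for the optimality gap of a candidate newsvendor solution (entirely illustrative: the problem, constants, and smoothing bandwidth are assumptions, not the talk's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
price, cost = 5.0, 3.0                     # illustrative newsvendor economics
crit = (price - cost) / price              # critical ratio

def profit(x, d):
    return price * np.minimum(x, d) - cost * x

def saa_best(sample):
    x = np.quantile(sample, crit)          # SAA-optimal order quantity
    return profit(x, sample).mean(), x

data = rng.gamma(2.0, 50.0, size=200)      # observed demand sample
_, x_hat = saa_best(data[:100])            # candidate solution (half the data)

gaps = []
for _ in range(1000):
    boot = rng.choice(data, size=data.size, replace=True)
    boot = np.maximum(boot + rng.normal(0.0, 5.0, boot.size), 0.0)  # smoothing
    v_best, _ = saa_best(boot)
    gaps.append(v_best - profit(x_hat, boot).mean())   # resampled gap estimate

lo_g, hi_g = np.quantile(gaps, [0.05, 0.95])
print(f"90% bootstrap CI for the optimality gap: [{lo_g:.2f}, {hi_g:.2f}]")
```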
Bio: Prof. David Woodruff's research concerns computational aspects of optimal decision making. He is particularly interested in problems with a mix of discrete and continuous choices over multiple time stages when there is significant uncertainty. His research includes solution algorithms, problem representation, and modeling language support. He has worked on applications in operations, logistics, and science, and has recently been involved in a number of applications in electrical energy planning and scheduling. From 2013 to 2019 he was editor-in-chief of the INFORMS Journal on Computing, a publication of the Institute for Operations Research and the Management Sciences.