Date: May 12, 2025
Speaker: Baris Ata
University of Chicago
This talk was not recorded at the request of the speaker.
Each year, nearly 13,000 eviction orders are issued in Cook County, Illinois. While most of these orders have an enforcement deadline, a portion does not. The Cook County Sheriff’s Office (CCSO) is responsible for enforcing these orders, which involves selecting the orders to prioritize and planning daily enforcement routes. This task presents a challenge: balancing “equity” (i.e., prioritizing orders that have been waiting longer) with “efficiency” (i.e., maximizing the number of orders served). Although the current CCSO policy is highly efficient, a significant fraction of eviction orders miss their deadline. Motivated by the CCSO’s operations, we study a model of eviction enforcement planning and propose a policy that dynamically prioritizes orders based on their type (deadline or no deadline), location, and waiting time. Our approach employs a budgeted prize-collecting vehicle routing problem (VRP) for daily planning, where the “prizes” are determined by solving a stochastic control problem. This stochastic control problem, which relies on the VRP for determining feasible actions at each decision point, is high-dimensional due to its spatial nature, leading to the curse of dimensionality. We overcome this challenge by building on recent advances in high-dimensional stochastic control using deep neural networks. We compare the performance of our proposed policy with two practical benchmark policies, including one that mimics the current CCSO policy, using data from CCSO. Similar to the CCSO policy, our proposed policy leads to efficient resource utilization, but it also reduces the percentage of orders that miss their deadline by 72.38% without degrading the overall service effort for either type of order. In a counterfactual study, we show that increasing the service capacity or extending the enforcement deadline further reduces the fraction of orders missing their deadline.
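The daily planning step described above can be caricatured as budgeted prize-collecting order selection. The sketch below is illustrative only, not the paper's VRP formulation and not the CCSO policy: the prize function, weights, and costs are made-up placeholders chosen to show how waiting time and deadline status might feed a budgeted selection.

```python
# Illustrative sketch only -- not the paper's VRP and not the CCSO policy.
# Each order carries a "prize" that grows with waiting time (with extra
# weight for deadline orders) and a service cost; the day's plan greedily
# packs high prize-per-cost orders under a budget. All weights are made up.

def daily_plan(orders, budget):
    """orders: list of (order_id, wait_days, has_deadline, cost)."""
    def prize(wait_days, has_deadline):
        # Hypothetical prize: waiting time, doubled for deadline orders.
        return wait_days * (2.0 if has_deadline else 1.0)

    ranked = sorted(orders, key=lambda o: prize(o[1], o[2]) / o[3], reverse=True)
    plan, spent = [], 0.0
    for oid, wait, ddl, cost in ranked:
        if spent + cost <= budget:  # greedy knapsack under the daily budget
            plan.append(oid)
            spent += cost
    return plan

orders = [("A", 30, True, 2.0), ("B", 50, False, 2.0),
          ("C", 5, True, 1.0), ("D", 40, True, 3.0)]
print(daily_plan(orders, budget=5.0))  # -> ['A', 'D']
```

In the paper the prizes come from a stochastic control problem rather than a fixed formula, and routing feasibility replaces the simple budget constraint; the greedy step here only conveys the flavor of the selection trade-off.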
Baris Ata is the Sigmund E. Edelstone Distinguished Service Professor of Operations Management at The University of Chicago Booth School of Business. He received the B.S. degree in Industrial Engineering from Bilkent University in 1997, and MS degrees in Engineering-Economic Systems and Operations Research (1999), Business Research (2000), Mathematics (2001), and Statistics (2002) and a Ph.D. in Operations, Information, and Technology (2003) from Stanford University. He takes a problem-driven approach to bridge theory and practice in operations management, focusing on dynamic decision-making in complex settings under uncertainty. On the theoretical side, Ata’s current research interests include solving high-dimensional stochastic control problems. On the application side, he works on identifying suitable candidates for xenotransplantation, improving operations in the criminal justice system, and addressing logistical challenges in last-mile delivery in Africa. Ata’s research has earned several awards, including the Best Paper in Service Science Award from INFORMS (2009), the William Pierskalla Best Paper Award from INFORMS (2015), and the Wickham Skinner Best Paper Award from POMS (2019). He is also a recipient of the Manufacturing and Service Operations Management Young Scholar Prize from INFORMS (2015) and the Emory Williams (schoolwide) MBA Teaching Award at Chicago Booth (2021). He currently serves as a Department Editor for Management Science. He has served as an Associate Editor for Operations Research, Management Science, Mathematics of Operations Research, Manufacturing & Service Operations Management, and Stochastic Systems.
Date: April 28, 2025
Speaker: Carri W. Chan
Columbia University
Sunk-cost bias occurs when decisions are influenced by the time, energy, and money already invested, rather than considering the future costs necessary to achieve success. This phenomenon of “irrational behavior” is well-documented in decision-making studies and is generally recognized as a factor that can lead to suboptimal decisions. In this work, we investigate how sunk cost (and the behavioral bias associated with it) can be used as an operational lever to increase service completion rates in a congested service system. We run a controlled online experiment and find that the abandonment rate is significantly reduced for the group of participants who incur a larger sunk cost. To better capture the dynamics of service systems and their impact on customers' behavior, we study a queueing model with sunk cost and strategic customers, where customers experience a disutility of balking that is proportional to the sunk cost they incur. We characterize the equilibrium behavior of the customers, from which we further derive the optimal strategy for the service provider in terms of whether to provide real-time queue length information to customers as well as the optimal level of sunk cost to impose. Our results show that the sunk cost strategy is effective only when waiting information is provided and that using a non-zero sunk cost is optimal when the queueing system is moderately congested. Through a comprehensive numerical study, we demonstrate that implementing a non-zero sunk cost can substantially improve the throughput of the system. In addition, we reveal an interesting asymmetric pattern in the robustness of the service provider's optimal policy: if the service provider cannot accurately estimate the customers' sensitivity to sunk cost, using an underestimated value gives more robust performance improvements. Based on joint work with Jimmy Qin and Jing Dong.
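The qualitative effect of a sunk-cost disutility on balking can be illustrated with a Naor-style observable queue. This toy is only a stand-in for the model in the talk; the joining rule and all parameter values below are hypothetical.

```python
# Naor-style toy, not the exact model in the talk: in an observable M/M/1
# queue, a customer who sees n others ahead joins iff the value of joining
# beats balking. Folding a sunk-cost disutility beta*s into the balking
# payoff raises the joining threshold, which is the qualitative effect the
# abstract describes. All parameter values are hypothetical.

def join_threshold(R, C, mu, beta, s):
    """Largest n at which joining is still preferred:
    R - (n + 1) * C / mu >= -beta * s."""
    n = 0
    while R - (n + 1) * C / mu >= -beta * s:
        n += 1
    return n  # customers join when they see fewer than n in the system

base = join_threshold(R=10.0, C=2.0, mu=1.0, beta=0.0, s=0.0)
with_sunk = join_threshold(R=10.0, C=2.0, mu=1.0, beta=0.5, s=4.0)
print(base, with_sunk)  # -> 5 6: sunk cost raises the joining threshold
```

A higher joining threshold means fewer balks at any given queue length, which is one way the abstract's throughput improvement can arise when queue-length information is provided.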
Carri W. Chan is the John A. Howard Professor of Business in the Decision, Risk and Operations Division and the Faculty Director of the Healthcare and Pharmaceutical Management Program at Columbia Business School. Her research is in the area of healthcare operations management. Her primary focus is in data-driven modeling of complex stochastic systems, efficient algorithmic design for queueing systems, dynamic control of stochastic processing systems, and econometric analysis of healthcare systems. Her research combines empirical and stochastic modeling to develop evidence-based approaches to improve patient flow through hospitals. She has worked with clinicians and administrators in numerous hospital systems including Geisinger Health, Northern California Kaiser Permanente, New York Presbyterian, and Montefiore Medical Center. She is the recipient of a 2014 National Science Foundation (NSF) Faculty Early Career Development Program (CAREER) award, the 2016 Production and Operations Management Society (POMS) Wickham Skinner Early Career Award, and the 2019 MSOM Young Scholar Prize. She currently serves as a co-Department Editor for the Healthcare Management Department at Management Science. She received her BS in Electrical Engineering from MIT and MS and PhD in Electrical Engineering from Stanford University.
Date: April 14, 2025
Speaker: Minshuo Chen
Northwestern University
Diffusion models have achieved state-of-the-art performance in generative modeling, with their power significantly enhanced by fine-tuning toward task-specific objectives. However, existing fine-tuning methods often lack a principled foundation and offer limited performance guarantees. In this talk, we present a mathematical framework for understanding guidance-based diffusion model fine-tuning, providing a systematic perspective on its optimization properties and algorithmic design.
We abstract task-specific objectives as a reward function; the fine-tuned diffusion model then aims to maximize this reward through the samples it generates. In the offline setting, we show that guidance enables high-quality sample generation, achieving optimality akin to off-policy bandit algorithms. In the online setting with real-time feedback, we establish a strong connection between guided diffusion and optimization. Specifically, gradient guidance-based diffusion effectively samples solutions to a regularized optimization problem, where the regularization arises from logged data. Regarding guidance design, directly using the gradient of the reward function as guidance can jeopardize the structure of generated samples. We investigate a modified form of gradient guidance based on a forward prediction loss, which provably preserves the latent structure. We further consider an iteratively fine-tuned version of gradient-guided diffusion where both the guidance and the score network are updated with newly generated samples. This process mimics a first-order optimization iteration in expectation, for which we prove an O(1/K) convergence rate to the global optimum when the objective function is convex.
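The link between gradient guidance and regularized optimization can be made concrete on a toy example. This is a simplified sketch, not the speaker's algorithm: the "pretrained" model is a Gaussian with a known score, the reward is a made-up quadratic, and Langevin dynamics stands in for the diffusion sampler. Mixing the reward gradient into the score samples from a tilted distribution whose mean interpolates between the data mean and the reward peak.

```python
import numpy as np

# Toy illustration of gradient guidance, not the speaker's method. The
# pretrained model is a unit-variance Gaussian centered at mu0; the reward
# is r(x) = -||x - target||^2. Adding guidance * grad r(x) to the score and
# running Langevin dynamics samples a tilted (regularized) distribution.
# All constants are made up for the example.

rng = np.random.default_rng(0)
mu0 = np.zeros(2)                 # data mean the score model has "learned"
target = np.array([2.0, 1.0])     # reward peak
guidance = 0.5                    # guidance strength

def score(x):
    return -(x - mu0)             # grad log p(x) for a unit-variance Gaussian

def reward_grad(x):
    return -2.0 * (x - target)

# Langevin dynamics with the reward gradient mixed into the score,
# run as 200 independent chains.
step, x = 1e-2, rng.standard_normal((200, 2))
for _ in range(2000):
    drift = score(x) + guidance * reward_grad(x)
    x = x + step * drift + np.sqrt(2 * step) * rng.standard_normal(x.shape)

mean = x.mean(axis=0)
# Stationary mean of the tilted target: (mu0 + 2*g*target) / (1 + 2*g),
# i.e., a compromise between the data mean and the reward maximizer.
predicted = (mu0 + 2 * guidance * target) / (1 + 2 * guidance)
print(mean, predicted)
```

The regularization in the abstract plays the role of the score term here: the sampler does not run to the reward maximizer but to a point pulled back toward what the logged data supports.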
Minshuo Chen is an assistant professor with the Department of Industrial Engineering & Management Sciences at Northwestern University. He was an associate research scholar with the Department of Electrical and Computer Engineering at Princeton University from 2022 to 2024. He received his Ph.D. from the School of Industrial and Systems Engineering at Georgia Tech, majoring in Machine Learning. His research focuses on developing principled methodologies and theoretical foundations of deep learning, with a particular interest in 1) generative models including diffusion models, 2) foundations of machine learning, such as optimization and sample efficiency, and 3) reinforcement learning.
Date: March 31, 2025
Speaker: Harsha Honnappa
Purdue University
Instrumentation and large-scale data collection in logistics, service and manufacturing operations are ubiquitous today. These complex systems exhibit nonstationarities and high variability, and the topic of this talk arises from the need to develop a theory and methodology for data-driven stochastic modeling of these systems. Multi-scale point processes are widely used for modeling discrete-event systems exhibiting nonstationarities and high variability. The path measure induced by a multi-scale point process can be represented as an infinite mixture model. The doubly stochastic Poisson (DSPP) or Cox process serves as a prime example in this talk, wherein the path measure of the Cox process is represented as an infinite mixture of a Poisson process path measure and a ‘mixing’ path measure over an underlying stochastic intensity process. Statistical inference for these models is challenging due to the need to estimate the mixing path measure using only point process observations. In other words, the underlying intensity is unobserved. This challenge is known as nonparametric maximum likelihood estimation (NPMLE) in the statistical literature. While much of the existing statistical literature on NPMLE focuses on mixing measures supported on finite-dimensional subspaces, multi-scale processes involve mixing measures supported on infinite-dimensional or path spaces. Within the applied probability literature, there has been considerable work focused on Cox processes. Most common methods for statistical estimation of a Cox process use an approximating time-discretized stochastic model. While this results in a parametric statistical model that is relatively easy to estimate, the estimated model pre-commits the modeler to a specific time-scale, and loses much of the rich multi-scale structure of the point process, resulting in poor inference.
In this talk, I will present our on-going work to solve the NPMLE problem alluded to before, while avoiding the aforementioned time-scale commitment. Our approach is to develop a deep generative modeling methodology for multi-scale point process models. Our method parameterizes the mixing path measures by deep neural networks, and trains these deep neural networks using variational inference. Variational inference optimizes a lower bound to the NPMLE objective, and training essentially tightens the lower bound by maximizing it over an auxiliary space of regular conditional probability (RCP) path measures, conditioned on the sample paths of the point process. We parameterize these auxiliary path measures by deep neural networks as well. For example, in the case of a Cox process with stochastic intensity modeled as a (non-autonomous) stochastic differential equation (SDE), the auxiliary space consists precisely of those path measures induced by a stochastic smoothing SDE, and the path measures are parameterized by embedding neural networks into the drift function of the stochastic smoothing SDE. By simultaneously maximizing the variational lower bound over the (parameterized) mixing path measures and the (parameterized) auxiliary path measures, we approximately solve the NPMLE problem. For reasons that will become apparent, in the case of Cox processes we title this the score-matched NPMLE. I will illustrate the methodology through Cox process examples, as well as experiments on infinite-server queues. On the one hand, I will present some recent results on providing probabilistic guarantees for score-matched NPMLE. On the other hand, much of the statistical theory associated with deep generative modeling using variational inference remains open (even in finite dimensional settings), and there are a number of challenging open problems to solve in the context of score-matched NPMLE as well.
Time permitting, I will discuss some of these open challenges.
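A finite-dimensional analogue of the NPMLE problem helps fix ideas. The sketch below is for intuition only: observations are Poisson counts whose latent rate is drawn from an unknown mixing distribution, and the mixing weights are estimated on a fixed grid by EM, which maximizes the mixture likelihood. The talk's setting replaces the grid with path measures parameterized by deep networks and the EM-style updates with variational inference; the rates and grid here are made up.

```python
import numpy as np
from math import lgamma

# Finite-dimensional NPMLE analogue (illustrative only): Poisson counts with
# a latent rate drawn from an unknown mixing distribution. We estimate the
# mixing weights on a fixed grid of candidate rates by EM.

rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(2.0, 500), rng.poisson(10.0, 500)])

grid = np.arange(1.0, 15.0)               # candidate rates 1, 2, ..., 14
w = np.full(len(grid), 1.0 / len(grid))   # mixing weights to be estimated

lg = np.array([lgamma(c + 1.0) for c in counts])
# log Poisson(c | rate) for every (observation, grid point) pair
logp = counts[:, None] * np.log(grid)[None, :] - grid[None, :] - lg[:, None]
lik = np.exp(logp)

lls = []
for _ in range(200):                       # EM iterations
    post = lik * w                         # unnormalized responsibilities
    post /= post.sum(axis=1, keepdims=True)
    w = post.mean(axis=0)                  # update the mixing weights
    lls.append(np.log(lik @ w).sum())      # mixture log-likelihood

print(np.round(w, 3))  # mass concentrates near the true rates 2 and 10
```

EM's monotone likelihood ascent mirrors the tightening of the variational lower bound in the talk; the hard part the talk addresses is that the "grid" there is an infinite-dimensional space of intensity paths.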
Harsha Honnappa is an Associate Professor in the Edwardson School of Industrial Engineering at Purdue University, where he directs the Stochastic Systems Lab. He is an applied probabilist with interests in the modeling and analysis of stochastic systems, theoretical statistics, stochastic optimization and control. His research is supported by multiple grants from the National Science Foundation, including an NSF CAREER award, the Office of Naval Research, the Purdue Research Foundation, and through the Edwardson School of Industrial Engineering’s Frontiers awards. He currently serves as Area Editor for Stochastic Models at Operations Research Letters, and as Associate Editor for the journals Operations Research and Queueing Systems.
Date: March 17, 2025
Speaker: David A. Goldberg
Cornell University
There has been a growing interest in studying online decision-making problems such as network revenue management (NRM) under more general correlation structures, motivated by the complex data sets, generative models, and high-variance phenomena driving modern applications. Although there has been interesting recent progress on our understanding of such NRM models, these works either posit a particular structure for the possible correlations, or have a complexity scaling with the number of Markovian “states of the world” driving the correlations (which may be exponentially large in complex models).
We make progress on both these fronts by showing that the NRM problem (suitably normalized) has an additive Polynomial Time Approximation Scheme (PTAS) (with runtime independent of the number of “states of the world”, and scaling at worst polynomially in the time horizon and other relevant parameters) under a completely general model of correlations, assuming access to a natural Monte Carlo simulator for the demand dynamics, and that the maximum number of resources used by a product is uniformly bounded.
Our result follows by observing that NRM may be viewed as a multistage stochastic integer program, just like optimal stopping, and showing that the phenomena recently proven by Chen and Goldberg regarding existence of a PTAS for optimal stopping extends to a much broader family of multistage stochastic problems. At the heart of this extension is a new way to conceptualize and implement stochastic (sub)gradient methods “on-the-fly” for the massive linear programs (on scenario trees) associated with natural relaxations of these problems, based on the simple idea that to implement a policy one needs only compute the relevant values along the sample path presented to you. The poly(1/epsilon) number of iterations of (sub)gradient descent required to achieve epsilon-accurate solutions “translates” to computing (noisy estimates of) nested conditional expectations with poly(1/epsilon) levels of nesting, which can be implemented recursively in polynomial time for any given sample path using the simulator (independent of the size of the overall scenario tree).
In contrast to several past works on PTAS for multi-stage stochastic programs, our methods have a runtime at worst polynomial in the time horizon. Furthermore, when the number of resources is held fixed, the runtime required by our algorithms to make each accept/reject decision can be made essentially independent of the time horizon altogether (by approximating certain sums which appear in subgradients using an extra layer of sampling). In addition, we derive new algorithms and insights for several online combinatorial optimization problems with general correlation structure, as well as for optimal stopping itself. Joint work with Sabri Cetin and Yilun Chen.
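The "compute only along the presented sample path" idea can be illustrated with a toy nested Monte Carlo recursion for a short-horizon stopping problem. This is a hypothetical sketch, not the paper's algorithm: it uses nothing but a next-state simulator (mirroring the Monte Carlo oracle assumed in the talk), so the scenario tree is never materialized, and the dynamics and reward below are made-up placeholders.

```python
import random

# Toy "on-the-fly" nested Monte Carlo, not the paper's algorithm: the value
# of a 2-step optimal stopping problem is estimated recursively using only a
# simulator of the next state, evaluated only at states actually visited.

random.seed(1)

def simulate_next(x):
    # Stand-in for the Monte Carlo demand simulator: a Gaussian random walk.
    return x + random.gauss(0.0, 1.0)

def value(x, t, horizon, n_samples=100):
    """V_t(x) = max(reward(x), E[V_{t+1}(X_{t+1}) | X_t = x]), with the
    conditional expectation replaced by fresh nested simulation -- one level
    of nesting per remaining period, never touching the full scenario tree."""
    if t == horizon:
        return max(x, 0.0)                     # terminal reward
    cont = sum(value(simulate_next(x), t + 1, horizon, max(n_samples // 2, 1))
               for _ in range(n_samples)) / n_samples
    return max(max(x, 0.0), cont)              # stop now vs. continue

v = value(0.0, 0, 2)
print(v)  # approximately the true value 1/sqrt(pi) ~ 0.56 for this toy
```

The work per call depends on the nesting depth and per-level sample counts, not on the number of "states of the world", which is the structural point behind the runtime guarantees in the abstract.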
David A. Goldberg is an Associate Professor in Cornell's ORIE department. Previously, he was the A. Russell Chandler III Associate Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. He received his Ph.D. in Operations Research at MIT in 2011, and his B.S. in Computer Science (minors in Operations Research and Applied Math) from Columbia University in 2006. Goldberg’s research is in applied probability, on topics including optimal stopping and control, the theory of inventories and queues, and applied Operations Research. His work has been recognized with accolades including an NSF CAREER award, 2019 INFORMS Applied Probability Society Best Publication award, 2019 INFORMS Nicholson Competition first place, 2018 INFORMS Finance Student Paper Competition finalist, 2015 INFORMS Nicholson Competition first place, 2015 INFORMS JFIG Competition second place, and 2014 MSOM and 2010 INFORMS Nicholson Competitions finalist. He is also an associate editor for the journals Stochastic Systems and Stochastic Models, former associate editor of the journals Operations Research and Queueing Systems, and former chair of the INFORMS Applied Probability Society. In addition, Goldberg has recently led an initiative to engage more undergraduates in OR/analytics research at Cornell (while serving as Director for Undergraduate Studies in ORIE), for which he was awarded both the 2023 Sonny Yau ’72 Teaching Award of the College of Engineering and the 2025 Community-Engaged Practice and Innovation Award of the Einhorn Center for Community Engagement.
Date: March 3, 2025
Speaker: Pengyu Qian
Boston University
Decentralized dynamic matching markets arise in numerous real-world applications, such as multi-hospital kidney exchange and job matching platforms. A key challenge in these markets is incentivizing self-interested agents to fully participate in a shared matching platform rather than withholding resources for their own gain. Our work studies incentive mechanisms that align individual agents' interests with socially optimal behavior in decentralized matching markets. We consider a model with multiple self-interested agents, each managing a local multi-way dynamic matching problem, where jobs of different types arrive stochastically and remain available for a limited time. Each agent holds private information about their job arrivals and actions and seeks to maximize their long-run average reward.
We first analyze a baseline Marginal-Value (MV) mechanism, which provides incentives based on the marginal value of submitted items. While MV ensures approximate incentive alignment, we show that it fails to entirely eliminate agents’ incentives to withhold jobs due to market frictions, limiting its predictive power. To address this, we introduce two enhanced mechanisms—Marginal-Value-plus-Priority (MVP) and Marginal-Value-plus-Credit (MVC)—and use mean-field equilibrium analysis to prove that, in mean-field/oblivious equilibrium, they fully eliminate the incentive to withhold jobs even in large finite markets. These mechanisms introduce carefully designed state-dependent rewards. We complement our theoretical analysis with numerical experiments using data based on kidney exchange. Our simulations demonstrate that the proposed mechanisms perform well in markets with realistic sizes.
Pengyu Qian is an Assistant Professor in the Operations & Technology Management department at the Questrom School of Business, Boston University. His research studies the design and analysis of marketplaces in dynamic settings, using tools from probability, optimization and game theory. He is interested in foundational models driven by challenges in sharing economy and the allocation of public resources. His research emphasizes algorithms/mechanisms that not only have good theoretical guarantees, but also are simple, robust, and hence practical for real-world systems. Pengyu is a recipient of the INFORMS JFIG Best Paper Prize. He earned his Ph.D. from Columbia Business School and his B.S. from Peking University.
Date: February 17, 2025
Speaker: Chara Podimata
MIT
In this talk, I will discuss learning in principal-agent games where there is information asymmetry between what the principal and the agent know about each other’s chosen actions. I will introduce a generalization of the standard Stackelberg Games (SGs) framework: Calibrated Stackelberg Games (CSGs). In CSGs, a principal repeatedly interacts with an agent who (contrary to standard SGs) does not have direct access to the principal’s action but instead best-responds to calibrated forecasts about it. I will show that in CSGs, the principal can achieve utility that converges to the optimum Stackelberg value of the game (i.e., the value that they could achieve had the agent known the principal’s strategy all along) both in finite and continuous settings, and that no higher utility is achievable. Finally, I will discuss a meta-question: when learning in strategic environments, can agents overcome uncertainty about their preferences to achieve outcomes they could have achieved absent any uncertainty? And can they do this solely through interactions with each other?
Chara Podimata is the 1942 Career Development Assistant Professor of Operations Research and Statistics at MIT. Her research focuses on incentive-aware ML and more broadly on social computing both from a theoretical and a practical standpoint. Her research is supported by Amazon and the MacArthur Foundation through an x-grant. She got her PhD from Harvard. In her free time, she runs and spends time with her pup, Terra.