Webinar Schedule

March 2024 [registration link]

Northeastern University

Cornell Tech

From Insights to Action: How Industrial Engineering Can Aid Anti-Human Trafficking Efforts

Human trafficking is a prevalent and malicious global human rights issue, with an estimated 24 million victims exploited worldwide. A major challenge to its disruption is that human trafficking is a complex system interwoven with other illegal and legal networks, both cyber and physical. Efforts to disrupt human trafficking must account for these complexities and for the ways in which a disruption to one portion of the network affects other network components. As such, industrial engineering models are uniquely positioned to address the challenges facing anti-human trafficking efforts. This presentation will discuss multiple anti-human trafficking research efforts focused on prevention, network disruption, and survivor empowerment: effectively allocating limited resources to disrupt human trafficking networks, increasing survivors’ access to services, and assessing the efficacy of coordination among anti-human trafficking stakeholders. We will discuss how a variety of industrial engineering methodologies can be used in such contexts and how a transdisciplinary, community-based participatory approach can move the anti-trafficking field forward.

Bio: Dr. Kayse Lee Maass is an Assistant Professor of Industrial Engineering and leads the Operations Research and Social Justice Lab at Northeastern University. Prior to joining Northeastern, she earned a PhD in Industrial and Operations Engineering from the University of Michigan and completed her postdoctoral studies in the Department of Health Sciences Research at the Mayo Clinic. Her research focuses on advancing operations research methodology to address access and equity issues within human trafficking, mental health, housing, and food justice contexts, and centers interdisciplinary survivor-informed expertise. Dr. Maass’s research is supported by multiple federal grants, including from the National Science Foundation and the National Institute of Justice, and has been used to inform policy and operational decisions at the local, national, and international levels.

Recommendations in High-stakes Settings: Diversity and Monoculture

Algorithmic recommendation systems -- historically developed for settings such as movies, songs, and media content -- are now well integrated into online matching platforms for high-stakes settings such as labor, education, and dating. With this integration comes renewed importance for challenges such as diversity (are we showing a diverse set of options to users?) and monoculture (what are the consequences of everyone using the same algorithm?). I'll describe some of our work in this space, emphasizing how OR techniques are essential to designing more efficient, equitable algorithms for such platforms. Joint work with Kenny Peng and many others.

Bio: Nikhil Garg is an Assistant Professor of Operations Research and Information Engineering at Cornell Tech as part of the Jacobs Technion-Cornell Institute. He uses algorithms, data science, and mechanism design approaches to study democracy, markets, and societal systems at large. Nikhil has received the INFORMS George Dantzig Dissertation Award, an honorable mention for the ACM SIGecom Dissertation Award, and several other best paper awards, and was named to the Forbes 30 Under 30 list in Science. He received his PhD from Stanford University and has spent considerable time in industry and working with practitioners.

February 2024

Georgia Institute of Technology

University of Wisconsin, Madison

Leveraging Operations Research for Responsible AI in Medicine: Generating Clinical Role Models for Personalized Treatment Target Selection

Type 2 Diabetes (T2D) and Atherosclerotic Cardiovascular Disease (ASCVD) comprise a significant portion of all chronic disease in the United States. Existing clinical guidelines provide recommendations for managing T2D and ASCVD. However, most guidelines focus on each condition separately, and there is little guidance for managing the two chronic diseases jointly. Further, many population-based guidelines are one-size-fits-most; consequently, individual differences in risk and treatment response are unaccounted for in these guidelines, which may inadvertently widen existing health disparities. This talk will present the use of operations research methodologies to design responsible Artificial Intelligence (AI) technologies for the joint management of T2D and ASCVD. In particular, we study the problem of generating clinical role models whose physiological measurements can serve as personalized treatment targets for patients with T2D who are at high risk of ASCVD. Our goal is to design a framework that is interpretable for clinicians and robust to data shifts. We formulate the problem as a data-driven robust recourse optimization problem and derive a tractable reformulation by exploiting its structural properties. We also design and analyze an active learning algorithm to solve the problem efficiently. Our computational analyses demonstrate the potential impact of adopting these technologies on patients’ health outcomes, as well as their health equity implications for diabetes care.
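The sketch below is a toy illustration of the clinical role model idea, with fabricated measurements, risk weights, and thresholds: among candidate patients with good outcomes, pick the one closest to the index patient whose measurements remain low-risk under every model in a small uncertainty set (a crude stand-in for the talk's data-driven robust recourse formulation and its active-learning solution method).

```python
# Toy "clinical role model" selection -- all numbers and risk models are invented.
import numpy as np

patient = np.array([8.5, 150.0])                  # hypothetical (HbA1c %, systolic BP)
candidates = np.array([[6.8, 125.0],              # possible role models (good outcomes)
                       [7.2, 118.0],
                       [6.5, 140.0],
                       [7.8, 135.0]])
risk_models = [np.array([0.30, 0.010]),           # uncertainty set of linear risk weights
               np.array([0.35, 0.012])]
threshold = 4.0                                   # arbitrary risk cutoff

def robustly_low_risk(x):
    # Require low risk under every model in the uncertainty set (robustness to shifts).
    return all(w @ x <= threshold for w in risk_models)

# (7.8, 135.0) passes under the first model but not the second, so it is screened out.
feasible = [c for c in candidates if robustly_low_risk(c)]
role_model = min(feasible, key=lambda c: np.linalg.norm((c - patient) / patient))
print(role_model)   # this candidate's measurements become the personalized treatment target
```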

Bio: Gian-Gabriel Garcia is an Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. His research involves the design, analysis, and optimization of data-driven frameworks at the intersection of prediction and decision-making as motivated by high-impact problems in health policy and personalized medicine. This research spans applications to diabetes, cardiovascular disease, concussion, opioids, and maternal health. He has secured federal funding as PI from the National Institutes of Health and the Agency for Healthcare Research and Quality. His research has also received recognition through various awards, including the IISE Transactions Best Paper Award in Operations Engineering and Analytics, the INFORMS Bonder Scholarship for Applied Operations Research in Health Services, the INFORMS Minority Issues Forum Paper Competition, and the SMDM Lee B. Lusted Prize in Quantitative Methods and Theoretical Developments. Before joining Georgia Tech, he was a postdoctoral fellow at Harvard Medical School. He earned his PhD and MS in Industrial and Operations Engineering from the University of Michigan and his BS in Industrial Engineering from the University of Pittsburgh.

Optimizing Community Health Worker Interventions with Model-Based Approaches

Diabetes is a global health priority, especially in lower-middle-income countries, where over 50% of premature deaths are attributed to high blood glucose. Several studies have demonstrated the feasibility of using Community Health Worker (CHW) programs to provide affordable and culturally tailored solutions for early detection and management of diabetes. To support the integration of CHWs into health systems and communities, the World Health Organization has stressed the need for evidence-based models for the deployment and management of these programs. We address this need by developing a novel optimization framework that builds personalized CHW visit plans to maximize glycemic control at the community level. Our model incorporates the tradeoff between providing screening and follow-up visits, as well as patient decisions to enroll, stay enrolled, or drop out of treatment based on disease progression and adverse factors associated with enrollment. We characterize the structure of the optimal solution and, to handle computational limitations, use our theoretical results to develop several approximate dynamic programming heuristics. Simulation experiments show that our approach can achieve better performance than the best baseline heuristic. Future extensions of our work will include handling uncertainty in patients' disease status, disease progression, and enrollment decisions, as well as evaluating the cost-effectiveness of implementing our approach at the policy level.
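As a rough illustration of the kind of model described above, the sketch below solves a tiny finite-horizon dynamic program for a single patient; all states, actions, probabilities, and rewards are invented, and the realistic community-scale problem calls for the approximate dynamic programming heuristics mentioned in the abstract rather than exact backward induction.

```python
# Toy CHW visit-planning DP: one patient, a small visit budget, and a tradeoff
# between screening visits (to enroll) and follow-up visits (to retain).
import itertools

states = ["unenrolled", "enrolled", "dropped_out"]
actions = ["none", "screen", "follow_up"]
T, budget = 6, 3                                   # planning horizon (months), total visits

transition = {                                     # made-up transition probabilities
    "unenrolled":  {"none": {"unenrolled": 1.0},
                    "screen": {"enrolled": 0.6, "unenrolled": 0.4},
                    "follow_up": {"unenrolled": 1.0}},
    "enrolled":    {"none": {"enrolled": 0.5, "dropped_out": 0.5},
                    "screen": {"enrolled": 0.5, "dropped_out": 0.5},
                    "follow_up": {"enrolled": 0.9, "dropped_out": 0.1}},
    "dropped_out": {"none": {"dropped_out": 1.0},
                    "screen": {"dropped_out": 0.7, "enrolled": 0.3},
                    "follow_up": {"dropped_out": 1.0}},
}
reward = {"unenrolled": 0.0, "enrolled": 1.0, "dropped_out": 0.0}  # proxy for glycemic control
uses_visit = {"none": 0, "screen": 1, "follow_up": 1}

# Backward induction over (time, state, remaining visit budget).
V = {(T, s, b): 0.0 for s in states for b in range(budget + 1)}
policy = {}
for t in reversed(range(T)):
    for s, b in itertools.product(states, range(budget + 1)):
        best_val, best_a = float("-inf"), None
        for a in actions:
            if uses_visit[a] > b:
                continue
            val = reward[s] + sum(p * V[(t + 1, s2, b - uses_visit[a])]
                                  for s2, p in transition[s][a].items())
            if val > best_val:
                best_val, best_a = val, a
        V[(t, s, b)], policy[(t, s, b)] = best_val, best_a

print(V[(0, "unenrolled", budget)], policy[(0, "unenrolled", budget)])
```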

Bio: Yonatan Mintz is an assistant professor in the Industrial and Systems Engineering department at the University of Wisconsin, Madison. His research focuses on the application of machine learning and automated decision making to human-sensitive contexts. One application of his research has been using patient-level data to create precision interventions. Yonatan is also interested in the sociotechnical implications of machine learning algorithms and has done work on fairness, accountability, and transparency in automated decision making. In terms of methodology, his research explores topics in machine learning theory, stochastic control, reinforcement learning, and nonconvex optimization. Yonatan's work has been recognized as a finalist in the INFORMS Health Applications Society Pierskalla Paper Competition and with a best poster award from the NeurIPS joint workshop on AI for Social Good, and he has been invited to speak publicly about his work in both print and televised media, including PBS. His research has been funded by American Family Insurance. Prior to joining UW-Madison, Yonatan was a postdoctoral research fellow in the Department of Industrial and Systems Engineering at the Georgia Institute of Technology. Yonatan received his B.S. in Industrial and Systems Engineering with a concentration in Operations Research from Georgia Tech in 2012, and his Ph.D. in Industrial Engineering and Operations Research from the University of California, Berkeley in 2018.

January 2024

University of Texas at Austin

Rice University

An Efficient Gradient Tracking Algorithmic Framework for Decentralized Optimization

Gradient tracking optimization algorithms have received significant attention in recent years for distributed optimization over networks due to their ability to converge to the solution under a constant step size. At every iteration, these algorithms require a computation step of calculating gradients at each node and a communication step to share the iterate and a gradient estimate among nodes. The complexity of these two steps varies significantly across different applications of distributed optimization. In this talk, we present an algorithmic framework that decomposes these two steps, provides flexibility in how they are performed at each iteration, and is robust to the stochasticity inherent in them. We provide optimal theoretical convergence and complexity guarantees, and illustrate the framework's performance on quadratic and classification problems.
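For readers unfamiliar with the method, the sketch below runs a standard gradient-tracking iteration (a DIGing-style update on a toy decentralized least-squares problem, not the speaker's generalized framework). At each iteration, the communication step is the multiplication by the mixing matrix W and the computation step is the local gradient evaluation.

```python
# Minimal gradient-tracking sketch: n agents minimize (1/n) * sum_i f_i(x)
# over a ring network, where f_i is a local least-squares loss.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                                        # agents, decision dimension
A = rng.standard_normal((n, 10, d))                # local data
b = rng.standard_normal((n, 10))
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])     # gradient of f_i at x

W = np.zeros((n, n))                               # doubly stochastic mixing matrix (ring)
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

alpha = 0.005                                      # constant step size
X = np.zeros((n, d))                               # one iterate per agent
Y = np.array([grad(i, X[i]) for i in range(n)])    # gradient trackers

for _ in range(2000):
    X_new = W @ X - alpha * Y                      # communicate iterates, take a step
    Y = W @ Y + np.array([grad(i, X_new[i]) - grad(i, X[i]) for i in range(n)])
    X = X_new

print(np.linalg.norm(X - X.mean(axis=0)))          # disagreement among agents shrinks toward zero
```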

Bio: Raghu Bollapragada is an assistant professor in the Operations Research and Industrial Engineering graduate program at the University of Texas at Austin (UT). Before joining UT, he was a postdoctoral researcher in the Mathematics and Computer Science Division at Argonne National Laboratory. He received his PhD and MS degrees in Industrial Engineering and Management Sciences from Northwestern University. During his graduate study, he was a visiting researcher at INRIA, Paris. His current research interests are in nonlinear optimization and its applications in machine learning. He has received the IEMS Nemhauser Dissertation Award for best dissertation, the IEMS Arthur P. Hurter Award for outstanding academic excellence, the McCormick terminal-year fellowship for an outstanding terminal-year PhD candidate, and the Walter P. Murphy Fellowship at Northwestern University.

On Graphs with Finite-Time Consensus and Their Use in Gradient Tracking

In this talk, we present sequences of graphs satisfying the finite-time consensus property (i.e., iterating through such a finite sequence is equivalent to performing global or exact averaging) and their use in Gradient Tracking. We provide an explicit weight matrix representation of the studied sequences and prove its finite-time consensus property. Moreover, we incorporate the studied finite-time consensus topologies into Gradient Tracking and present a new algorithmic scheme called Gradient Tracking for Finite-Time Consensus Topologies (GT-FT). We analyze the new scheme for nonconvex problems with stochastic gradient estimates. Our analysis shows that the convergence rate of GT-FT does not depend on the heterogeneity of the agents' functions or the connectivity of any individual graph in the topology sequence. Furthermore, owing to the sparsity of the graphs, GT-FT requires lower communication costs than Gradient Tracking using the static counterpart of the topology sequence.
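As a concrete instance of the finite-time consensus property (my own example, not necessarily one of the graph sequences studied in the talk), the one-peer hypercube sequence on n = 2^t nodes pairs each node with the neighbor differing in one bit; each graph is very sparse, yet the product of the t weight matrices equals exact global averaging.

```python
# Verify finite-time consensus for one-peer hypercube graphs.
import numpy as np

t = 3
n = 2 ** t
Ws = []
for k in range(t):
    W = np.zeros((n, n))
    for i in range(n):
        j = i ^ (1 << k)            # neighbor differing from i in bit k
        W[i, i] = W[i, j] = 0.5     # average with that single neighbor
    Ws.append(W)

P = np.linalg.multi_dot(Ws)
print(np.allclose(P, np.ones((n, n)) / n))   # True: the sequence achieves exact averaging
```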

Bio: Cesar A. Uribe is the Louis Owen Assistant Professor in the Department of Electrical and Computer Engineering at Rice University. He received M.Sc. degrees in systems and control from the Delft University of Technology in the Netherlands and in applied mathematics from the University of Illinois at Urbana-Champaign in 2013 and 2016, respectively. He received his Ph.D. in electrical and computer engineering from the University of Illinois at Urbana-Champaign in 2018. He was a Postdoctoral Associate in the Laboratory for Information and Decision Systems (LIDS) at the Massachusetts Institute of Technology (MIT) until 2020, and he held a visiting professor position at the Moscow Institute of Physics and Technology until 2022. His research interests include distributed learning and optimization, decentralized control, algorithm analysis, and computational optimal transport.

December 2022

Lauren Steimle

Georgia Institute of Technology

Emily Tucker

Clemson University

Multi-Criteria Optimization to Inform Colleges’ Academic Operations Response to COVID-19

Although physical (or "social") distancing was an important public health intervention during the COVID-19 pandemic, it dramatically reduced the effective capacity of classrooms. This presented a unique problem to campus planners who hoped to deliver a meaningful amount of in-person instruction in a way that respected physical distancing. This process involved (1) assigning a mode to each offered class as remote, residential (in-person), or hybrid and (2) reassigning classrooms under severely reduced capacities to the non-remote classes. These decisions needed to be made quickly and under several constraints and competing priorities, such as restrictions on changes to the timetable of classes, trade-offs between classroom density and the educational benefits of in-person versus online instruction, and administrative preferences for course modes and classroom reassignments. We solve a flexible integer program and use hierarchical optimization to handle the multiple criteria according to their priorities. We show that our optimization model provides a significant improvement on several metrics representing the amount of physically distanced in-person instruction delivered, compared with a strategy in which no rooms are reassigned. We discuss how this model informed an iterative and collaborative decision-making process with the Georgia Tech COVID-19 Task Force throughout the summer of 2020.
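To make the hierarchical (lexicographic) idea concrete, here is a toy two-priority room-assignment model written with the PuLP modeling library; the data, objectives, and constraints are invented for illustration and are far simpler than the model described above. The first solve maximizes seated, physically distanced enrollment; the second solve fixes that value as a constraint and minimizes the number of classes moved out of their original room.

```python
# Toy lexicographic (hierarchical) optimization with two priority levels.
import pulp

classes = {"c1": 40, "c2": 25, "c3": 60}            # enrollment per class
rooms = {"r1": 30, "r2": 45}                        # distanced room capacities
original = {"c1": "r1", "c2": "r2", "c3": "r1"}     # pre-pandemic room assignments

x = pulp.LpVariable.dicts("assign", (classes, rooms), cat="Binary")

def base_model(sense):
    m = pulp.LpProblem("room_assignment", sense)
    for c in classes:                               # each class gets at most one room (else remote)
        m += pulp.lpSum(x[c][r] for r in rooms) <= 1
    for r in rooms:                                 # one class per room (single time slot)
        m += pulp.lpSum(x[c][r] for c in classes) <= 1
    return m

seated = pulp.lpSum(min(classes[c], rooms[r]) * x[c][r] for c in classes for r in rooms)

m1 = base_model(pulp.LpMaximize)                    # priority 1: maximize seated students
m1 += seated
m1.solve(pulp.PULP_CBC_CMD(msg=0))
best_seated = pulp.value(seated)

m2 = base_model(pulp.LpMinimize)                    # priority 2: minimize room changes
moved = pulp.lpSum(x[c][r] for c in classes for r in rooms if r != original[c])
m2 += moved
m2 += seated >= best_seated                         # lock in the priority-1 optimum
m2.solve(pulp.PULP_CBC_CMD(msg=0))

print(best_seated, pulp.value(moved))
```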

Stakeholder-Engaged Model Development: A Case Study of COVID-19 Response at Clemson University

The onset of the COVID-19 pandemic in Spring 2020 forced universities in the United States to quickly shift to remote learning in order to protect university students, faculty, and staff. During the summer of 2020, Clemson University prioritized a safe return to in-person learning. Several concurrent modeling efforts sought to locate hand sanitizers, schedule classes, and determine rotating in-person cohorts. The talk will briefly survey each and then focus on two areas. (1) I will present a process to integrate stakeholder feedback into model development. Model iterations are driven by human factors analysis of qualitative interview data, and we suggest that this fusion is broadly applicable to the real-world application of optimization in other areas. (2) I will present a framework for using optimization in response to fundamental surprise events. Adaptations range from minor model changes, such as adapting data, to the creation of entirely new models.

References:

O'Brien TC, Foster S, Tucker EL, Hegde S. "Iterative Location Modeling of Hand Sanitizer Deployment Based upon Qualitative Interviews" https://arxiv.org/abs/2204.00609

Sharkey TC, Foster S, Hegde S, Kurz ME, Tucker EL. "A Framework for Operations Research Model Use in Resilience to Fundamental Surprise Events: Observations from University Operations during COVID-19" https://arxiv.org/abs/2210.08963

November 11, 2022

Jacob Mays

Cornell University

Lesia Mitridati

Technical University of Denmark (DTU)

Markets for Zero-Carbon Electricity

In this talk, I will introduce three major challenges for operations researchers as wholesale electricity markets transition to carbon-free resources. The first challenge is the formation of spot prices that adequately convey the reliability and flexibility needs of the system. The second centers on volatility, risk management, and supporting efficient long-run investment decisions. The third concerns the coordination of decentralized investment in generation resources with centralized planning of transmission networks.

The Cost of Privacy in the Coordination of Energy Markets

Coordination between energy sectors has been identified as a cornerstone on the path towards a more sustainable energy system. However, the coordination of sequential and independent markets relies on the exchange of sensitive information between market and system operators, namely time series of consumers' loads. In this talk we address the privacy concerns arising from this exchange by introducing a novel privacy-preserving Stackelberg mechanism (w-PPSM) which generates differentially private data streams with high fidelity. The novelty of the algorithm is to optimally redistribute the noise introduced by the traditional differentially private Laplace mechanism in order to limit the cost of privacy in each energy sector and ensure close-to-optimal operation of the markets. We derive theoretical bounds on the cost of privacy introduced by the w-PPSM. Furthermore, through multiple numerical simulations in a realistic energy system, we demonstrate that the w-PPSM can achieve up to a two-orders-of-magnitude reduction in the cost of privacy compared to the Laplace mechanism. This approach facilitates the exchange of privacy-preserving information between independent market and system operators in energy systems while ensuring near-optimal coordination between them. It also opens the way to quantifying the value of information and designing privacy-aware market mechanisms.
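For context, the baseline that the w-PPSM improves upon is the standard Laplace mechanism; the sketch below applies it to a made-up load time series (the sensitivity, privacy budget, and data are assumptions, and the w-PPSM's noise-redistribution step is not reproduced here).

```python
# Standard Laplace mechanism applied to a hypothetical consumer load profile.
import numpy as np

rng = np.random.default_rng(1)
loads = np.array([3.2, 4.1, 5.0, 4.6, 3.8])     # MW, invented load time series
sensitivity = 1.0                               # assumed worst-case impact of one consumer
epsilon = 0.5                                   # privacy budget

noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=loads.shape)
noisy_loads = loads + noise                     # differentially private data stream
print(noisy_loads)
```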

September 16, 2022

Sara Shashaani

North Carolina State University

Eunhye Song

Georgia Institute of Technology

Monte Carlo-Based Machine Learning

While simulation methodology is mainly used for computer models with inexact outputs, there is merit in viewing results from samples of an existing dataset as replications of a stochastic simulation. This new view of machine learning leads to prediction models within a Monte Carlo approach, which allows more direct accountability for the underlying data distribution when building the models. We opt for nonparametric input uncertainty with multi-level bootstrapping to make the framework applicable to large datasets. The cost of Monte Carlo-based model construction is controllable through optimal designs of nested bootstrapping and the integration of variance reduction strategies. The benefit is substantial in providing more robust predictions. Along with reassuring statistical properties, the implementation of the proposed method in several data-driven problems further indicates its superiority over state-of-the-art techniques.
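A bare-bones version of the nested-bootstrap view is sketched below (illustrative only; the model, data, and nesting sizes are invented, and the talk's optimal nesting designs and variance reduction strategies are not shown). Outer bootstraps of the dataset capture input uncertainty, while inner bootstraps play the role of simulation replications used to refit the prediction model.

```python
# Nested bootstrap: treat resamples of a dataset as stochastic-simulation replications.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(200)
x_new = np.array([[0.3, -0.1, 1.2]])           # point at which we want a prediction

B_outer, B_inner = 30, 10
preds = []
for _ in range(B_outer):                       # outer layer: input (data) uncertainty
    idx = rng.integers(0, len(y), len(y))
    Xb, yb = X[idx], y[idx]
    for _ in range(B_inner):                   # inner layer: replications of the "simulation"
        jdx = rng.integers(0, len(yb), len(yb))
        model = LinearRegression().fit(Xb[jdx], yb[jdx])
        preds.append(model.predict(x_new)[0])

preds = np.array(preds)
print(preds.mean(), preds.std())               # prediction and a measure of its variability
```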

Selection of the Most Probable Best

In many business applications, simulation is the primary decision-making tool for a complex stochastic system where an analytical expression of the problem is unavailable. Often, parameters of these simulators are unknown and must be estimated from data. When plug-in estimates of the parameters are adopted, there is a risk of making a suboptimal decision due to the estimation error in the parameter values. Under this type of model risk, this talk introduces a new decision-making framework, the most probable best (MPB), in the context of simulation optimization. The MPB is defined as the solution whose posterior probability of being optimal is the largest given the data and the simulation outputs. Based on large-deviation theory, we propose efficient sequential sampling algorithms to find the MPB and discuss their asymptotic optimality (in efficiency). To demonstrate the business insights the MPB formulation provides, we will present a product portfolio optimization problem in which consumer utility parameters are estimated from conjoint survey data.
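The sketch below illustrates the MPB definition with made-up numbers: draw the unknown parameter from its posterior, identify the best candidate solution under each draw, and report the solution that is optimal most often. (The talk's sequential sampling algorithms allocate simulation effort far more carefully than this brute-force computation.)

```python
# Brute-force illustration of the most probable best (MPB).
import numpy as np

rng = np.random.default_rng(3)
posterior_draws = rng.normal(loc=1.0, scale=0.4, size=5000)   # draws of the unknown parameter theta

def mean_performance(solution, theta):
    # Hypothetical performance of three candidate solutions as a function of theta.
    return {"A": 2.0 - theta, "B": 0.5 + 0.5 * theta, "C": 1.2 * theta - 0.3}[solution]

solutions = ["A", "B", "C"]
wins = {s: 0 for s in solutions}
for theta in posterior_draws:
    best = max(solutions, key=lambda s: mean_performance(s, theta))
    wins[best] += 1

probs = {s: wins[s] / len(posterior_draws) for s in solutions}
print(probs)                                  # posterior probability of each solution being optimal
print(max(probs, key=probs.get))              # the MPB
```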