Spring 2022

Ramon van Handel

Princeton University

Sharp matrix concentration inequalities

Date: Jan 24, 2022




Abstract

Classical matrix concentration inequalities give nonasymptotic bounds on the spectral norm of sums of independent random matrices. These bounds are extremely versatile and are widely used in applications. On the other hand, it is well known that these inequalities fail to yield sharp results for even the simplest random matrix models. In this talk, I will describe a powerful new class of matrix concentration inequalities that are able to capture the sharp behavior of the spectrum of many random matrix models. These inequalities arise from an unexpected phenomenon: the spectrum of random matrices is accurately captured by certain predictions of free probability theory under surprisingly minimal assumptions. I aim to explain what these new inequalities look like and how to use them.

The talk is based on joint works with Afonso Bandeira and March Boedihardjo, and with Tatiana Brailovskaya.

Bio

Ramon van Handel is an Associate Professor at Princeton University. His interests lie broadly in probability theory, analysis, geometry, and their interactions.

Rhonda Righter

The University of California, Berkeley

Service and Matching Systems with Compatibility Constraints (slides)

Date: Jan 31, 2022




Abstract

In large service systems, such as cloud computing systems, there are different classes of jobs and of servers such that each job class can only be done on a subset of the server classes, due to data locality and other constraints. Similarly, there are often compatibility constraints in dynamic matching systems such as platforms for car sharing and waitlists for organ transplants. For Markovian FCFS systems, the steady-state distributions have been shown to have a simple “product-form” structure. I will describe a unified framework for these models that provides a common simple proof for the product-form results at a detailed state level, and provides a simple, state-aggregated, view for analyzing waiting time distributions. I will also discuss recent results comparing collaborative (cancel-on-complete) and noncollaborative (cancel-on-start) protocols, and the impacts of flexibility.

Joint work with Ivo Adan, Igor Kleiner, Kristen Gardner, and Gideon Weiss.

Bio

Rhonda Righter is a Professor and past Chair of the Department of Industrial Engineering and Operations Research at the University of California, Berkeley. Before joining the faculty at Berkeley she taught at the Leavey School of Business at Santa Clara University. Her PhD is in Industrial Engineering and Operations Research from UC Berkeley and her BS is in applied math and business from Carnegie Mellon. Her primary research and teaching interests are in the general area of stochastic modeling and optimization, especially as applied to service, manufacturing, telecommunications, and large-scale computing systems.

She is an associate editor for Queueing Systems, Probability in the Engineering and Informational Sciences, Stochastic Models, and the INFORMS Service Science Journal. She has also served on the editorial boards of Management Science, Operations Research, Operations Research Letters, the Journal of Scheduling, and Naval Research Logistics. She is the past (founding) Chair of the Applied Probability Society (APS) of INFORMS and currently serves on the APS Prize Committee.

Henry Lam

Columbia University

Cheap Bootstrap for Fast Uncertainty Quantification (slides)

Date: Feb 07, 2022




Abstract

The bootstrap is a versatile method for statistical uncertainty quantification, but when applied to large-scale or simulation-based models, it can face substantial computational demands from repeated data resampling and model refitting. We present a bootstrap methodology that uses minimal computation, namely a resampling effort as low as one Monte Carlo replication, while maintaining desirable statistical guarantees. We describe how this methodology can be used for fast inference across different estimation problems. We also discuss its particular relevance and generalizations to handling uncertainties in stochastic simulation analysis.

Bio

Henry Lam is an Associate Professor in the Department of Industrial Engineering and Operations Research at Columbia University. He received his Ph.D. degree in statistics from Harvard University in 2011, and was on the faculty of Boston University and the University of Michigan before joining Columbia in 2017. His research interests include Monte Carlo methods, uncertainty quantification, data-driven optimization and rare-event analysis. His work has been recognized with several awards, including the NSF CAREER Award, the JPMorgan Chase Faculty Research Award, and the Adobe Faculty Research Award.

Henry serves on the editorial boards of Operations Research, INFORMS Journal on Computing, Applied Probability Journals, Stochastic Models, Manufacturing and Service Operations Management, and Queueing Systems, and as the Area Editor in Stochastic Models and Data Science in Operations Research Letters. He co-chaired the Applied Probability Cluster at INFORMS Annual Meeting 2018 and served as a council member of the INFORMS Applied Probability Society in 2017-2019.

Chang-Han Rhee

Northwestern University

Eliminating Sharp Minima from SGD with Truncated Heavy-Tailed Noise (slides)

Date: Feb 21, 2022




Abstract

The empirical success of deep learning is often attributed to SGD’s mysterious ability to avoid sharp local minima in the loss landscape, as sharp minima are known to lead to poor generalization. Recently, empirical evidence of heavy-tailed gradient noise was reported in many deep learning tasks, and it was argued that SGD can escape sharp local minima in the presence of such heavy-tailed gradient noise, providing a partial explanation of the mystery. This talk analyzes a popular variant of SGD where gradients are truncated above a fixed threshold. We show that it achieves a stronger notion of avoiding sharp minima: it can effectively eliminate sharp local minima entirely from its training trajectory. Further, we rigorously characterize the first exit times from local minima and prove that under some structural conditions, the dynamics of heavy-tailed truncated SGD with small learning rates closely resemble those of a continuous-time Markov chain that never visits any sharp minima. Real data experiments on deep neural networks confirm our theoretical prediction that SGD with truncated heavy-tailed gradient noise finds flatter local minima and achieves better generalization.

This talk is based on joint work with Xingyu Wang and Sewoong Oh.
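For readers unfamiliar with the truncation step, a minimal sketch of norm-truncated SGD follows. It is illustrative only: the function names and default threshold are mine, not from the paper, and real implementations operate on framework tensors rather than Python lists.

```python
import math

def truncate(grad, threshold):
    """Truncate the gradient norm at a fixed threshold b: g -> g * min(1, b / ||g||)."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm <= threshold:
        return list(grad)
    scale = threshold / norm
    return [g * scale for g in grad]

def sgd_step(params, grad, lr=0.1, threshold=1.0):
    """One SGD update using the truncated gradient."""
    g = truncate(grad, threshold)
    return [p - lr * gi for p, gi in zip(params, g)]

# A heavy-tailed gradient sample of norm 50 is rescaled down to norm 1,
# while a small gradient passes through unchanged.
print(truncate([30.0, 40.0], 1.0))
print(truncate([0.3, 0.4], 1.0))
```

The point of the talk is that this seemingly mundane clipping, combined with heavy-tailed noise, changes the qualitative exit behavior from sharp minima.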

Bio

Chang-Han Rhee is an Assistant Professor in Industrial Engineering and Management Sciences at Northwestern University. Before joining Northwestern University, he was a postdoctoral researcher at Centrum Wiskunde & Informatica and Georgia Tech. He received his Ph.D. in Computational and Mathematical Engineering from Stanford University. His research interests include applied probability, stochastic simulation, and machine learning. He was a winner of the Outstanding Publication Award from the INFORMS Simulation Society in 2016, a winner of the Best Student Paper Award at the 2012 Winter Simulation Conference, and a finalist in the 2013 INFORMS George Nicholson Student Paper Competition.

Christina Lee Yu

Cornell University

Overcoming the Long Horizon Barrier for Sample-Efficient Reinforcement Learning with Latent Low-Rank Structure (slides)

Date: Feb 28, 2022



Abstract

The practicality of reinforcement learning algorithms has been limited by poor scaling with respect to the problem size, as the sample complexity of learning an epsilon-optimal policy scales with |S| times |A| over worst-case instances of an MDP with state space S, action space A, and horizon H. A key question is: when can we design provably efficient RL algorithms that exploit nice structure? We consider a class of MDPs that exhibit low-rank structure, where the latent features are unknown. We argue that a natural combination of value iteration and low-rank matrix estimation results in an estimation error that grows doubly exponentially in the horizon length. We then provide a new algorithm, along with statistical guarantees, that efficiently exploits low-rank structure given access to a generative model, achieving a sample complexity that scales linearly in |S|+|A| and polynomially in the horizon and the rank. In contrast to the literature on linear and low-rank MDPs, we do not require a known feature mapping, our algorithm is computationally simple, and our results hold for long time horizons. Our results provide insights on the minimal low-rank structural assumptions required on the MDP with respect to the transition kernel versus the optimal action-value function.

Bio

Christina Lee Yu is an Assistant Professor at Cornell University in the School of Operations Research and Information Engineering. Prior to Cornell, she was a postdoc at Microsoft Research New England. She received her PhD and MS in Electrical Engineering and Computer Science from Massachusetts Institute of Technology, and she received her BS in Computer Science from California Institute of Technology. She received honorable mention for the 2018 INFORMS Dantzig Dissertation Award, and she is a recipient of the 2021 Intel Rising Stars Award and 2021 JPMorgan Faculty Research Award. Her research interests include algorithm design and analysis, high dimensional statistics, inference over networks, sequential decision making under uncertainty, online learning, and network causal inference.

Alexander Stolyar

University of Illinois at Urbana-Champaign

Non-Work-Conserving Load Distribution Systems (slides)

Date: Mar 07, 2022




Abstract

A parallel server system with so-called cancel-on-completion redundancy is considered. A key feature of this model, which makes its analysis challenging, is that it is in general non-work-conserving: the average amount of new workload added to the system by an arriving job depends on the system state at the time of arrival. Our main focus is on the mean-field asymptotic regime, where the number of servers goes to infinity while the job arrival rate per server remains constant. The main results address the following question: under which conditions does steady-state asymptotic independence of server workloads hold? Our analysis relies almost exclusively on two fundamental properties of the model: monotonicity and the property that, on average, “new arriving workload prefers to go to servers with lower workloads.”

Bio

Alexander Stolyar has been a Professor in the ISE Department and the Coordinated Science Lab at UIUC since 2017. His research interests are in stochastic processes, queueing theory, and stochastic modeling of information, communication, service, and supply chain systems. He received his Ph.D. in Mathematics from the Institute of Control Science, Moscow, in 1989, and was a research scientist at the Institute of Control Science from 1989 to 1991. From 1992 to 1998 he worked on stochastic models in telecommunications at Motorola and AT&T Research. From 1998 to 2014 he was with Bell Labs Mathematical Sciences Research in Murray Hill, New Jersey, working on stochastic networks and resource allocation problems in a variety of applications, including wireless and wireline communications, service systems, and network clouds. From 2014 to 2016 he was a Professor in the ISE Department at Lehigh University. He is an INFORMS Fellow, and received the 2004 INFORMS Applied Probability Society Best Publication Award and the SIGMETRICS 1996 Best Paper Award.

Anton Braverman

Kellogg School of Management, Northwestern University

The join-the-shortest-queue system in the Halfin-Whitt regime: rates of convergence to the diffusion limit (slides)

Date: Mar 21, 2022




Abstract

Parallel-server systems have received a lot of attention over the last several years and, in particular, a lot of effort has gone towards understanding the join-the-shortest-queue (JSQ) system. In this talk I will show that in the Halfin-Whitt regime, the steady-state distribution of the JSQ system converges to its diffusion limit at a rate of 1/sqrt(n), where n is the number of servers. The proof uses Stein's method and, specifically, the recently proposed prelimit generator approach. This particular application of Stein's method is nontrivial because both the JSQ model and its diffusion limit are multidimensional, and state-space collapse is involved.

Both this result and the underlying methodology are part of a broader agenda to make Stein's method more user friendly and expand the class of models for which it can be applied. Time permitting, I will give a brief overview of the current frontiers and limitations of the approach.

Bio

Anton Braverman joined the Operations group at Kellogg in 2017. He completed his PhD in Operations Research from Cornell University, and holds a Bachelor's degree in Mathematics and Statistics from the University of Toronto. Anton's research is focused on stochastic modelling and applied probability.

Application domains of interest include ridesharing services and healthcare operations.

Daniel Russo

Columbia Business School

Adaptivity and Confounding in Multi-Armed Bandit Experiments (slides)

Date: Mar 28, 2022




Abstract

This talk explores a new model of bandit experiments where a potentially nonstationary sequence of contexts influences arms' performance. Context-unaware algorithms risk confounding, while those that perform correct inference face information delays. Our main insight is that an algorithm we call deconfounded Thompson sampling strikes a delicate balance between adaptivity and robustness. Its adaptivity leads to optimal efficiency properties in easy stationary instances, but it displays surprising resilience in hard nonstationary ones which cause other adaptive algorithms to fail.

Bio

Daniel Russo joined the Decision, Risk, and Operations division of the Columbia Business School as an assistant professor in Summer 2017. Prior to joining Columbia, Russo spent one great year as an assistant professor in the MEDS department at Northwestern's Kellogg School of Management and one year at Microsoft Research in New England as a Postdoctoral Researcher. He received his PhD from Stanford University in 2015, where he was advised by Benjamin Van Roy. In 2011, Russo received his BS in Mathematics and Economics from the University of Michigan.

Russo's research lies at the intersection of statistical machine learning and sequential decision-making, and contributes to the fields of online optimization, reinforcement learning, and sequential design of experiments. He is interested in the design and analysis of algorithms that learn over time to make increasingly effective decisions through interacting with a poorly understood environment.

Adam Wierman

California Institute of Technology

Online Optimization and Control using Black-Box Predictions (slides)

Date: April 04, 2022




Abstract

Making use of modern black-box AI tools is potentially transformational for online optimization and control. However, such machine-learned algorithms typically do not have formal guarantees on their worst-case performance, stability, or safety. So, while their performance may improve upon traditional approaches in “typical” cases, they may perform arbitrarily worse in atypical scenarios, e.g., due to distribution shift or unrepresentative training data. This represents a significant drawback when considering the use of AI tools for energy systems and autonomous cities, which are safety-critical. A challenging open question is thus: Is it possible to provide guarantees that allow black-box AI tools to be used in safety-critical applications? In this talk, I will introduce recent work that aims to develop algorithms that make use of black-box AI tools to provide good performance in the typical case while integrating the “untrusted advice” from these algorithms into traditional algorithms to ensure formal worst-case guarantees. Specifically, we will discuss the use of black-box untrusted advice in the context of online convex body chasing, online non-convex optimization, and linear quadratic control, identifying both novel algorithms and fundamental limits in each case.

Bio

Adam Wierman is a Professor in the Department of Computing and Mathematical Sciences at Caltech. He received his Ph.D., M.Sc., and B.Sc. in Computer Science from Carnegie Mellon University and has been on the faculty at Caltech since 2007. Adam’s research strives to make the networked systems that govern our world sustainable and resilient. He is best known for his work spearheading the design of algorithms for sustainable data centers and his co-authored book “The Fundamentals of Heavy Tails”. He is a recipient of multiple awards, including the ACM Sigmetrics Rising Star award, the ACM Sigmetrics Test of Time award, the IEEE Communications Society William R. Bennett Prize, and multiple teaching awards, and is a co-author of papers that have received best paper awards at a wide variety of conferences across computer science, power engineering, and operations research.

Lawrence M. Wein

Stanford University

Analysis of the Genealogy Process in Forensic Genetic Genealogy (slides)

Date: April 25, 2022




Abstract

The genealogy process is typically the most time-consuming part of, and a limiting factor in the success of, forensic genetic genealogy, a new approach to solving violent crimes and identifying human remains. We formulate a stochastic dynamic program that, given the list of matches and their genetic distances to the unknown target, chooses the best decision at each point in time: which match to investigate (i.e., find its ancestors), which set of potential common ancestors to descend from (i.e., find their descendants), or whether to terminate the investigation. The objective is to maximize the probability of finding the target minus a cost on the expected size of the final family tree. We estimate the parameters of our model using data from 17 cases (eight solved, nine unsolved) from the DNA Doe Project. We assess the Proposed Strategy on simulated versions of the 17 DNA Doe Project cases, and compare it to a Benchmark Strategy that ranks matches by their genetic distance to the target and only descends from known common ancestors between a pair of matches. The Proposed Strategy solves cases 10-fold faster than the Benchmark Strategy, and does so by aggressively descending from a set of potential most recent common ancestors between the target and a match even when this set has a low probability of containing the correct most recent common ancestor.

Bio

Lawrence M. Wein is the Jeffrey S. Skoll Professor of Management Science and a Senior Associate Dean of Academic Affairs at the Graduate School of Business, Stanford University. He received a Ph.D. in Operations Research at Stanford University in 1988 and was a professor at MIT's Sloan School of Management from 1988 to 2002. His research interests are in operations management and public health. He was Editor-in-Chief of Operations Research from 2000 to 2005. He has been awarded a Presidential Young Investigator Award, the Erlang Prize, the Koopman Prize, the INFORMS Expository Writing Award, the Philip McCord Morse Lectureship, the INFORMS President’s Award, the Frederick W. Lanchester Prize, the George E. Kimball Medal, a best paper award from Risk Analysis, and a notable paper award from the Journal of Forensic Sciences. He is an INFORMS Fellow, a M&SOM Fellow and a member of the National Academy of Engineering.

Alessandro Arlotto

Duke University

On the Constant Regret in Multi-Secretary and Dynamic Resource Allocation Problems (slides)

Date: May 02, 2022




Abstract

In the multi-secretary problem, T nonnegative, independent random variables with a common distribution are sequentially presented to a decision maker who is allowed to make k ≤ T selections to maximize the expected total value of the selected elements. Assuming that the values are drawn from a known distribution with finite support, we develop an adaptive budget-ratio policy and prove that its regret—the expected gap between the budget-ratio policy and the optimal offline solution in which all T values are made visible at time 0—is bounded by a constant that depends on neither the time horizon T nor the budget k. When we generalize the arrival process of this multi-secretary problem to one in which the arrivals in each period can be of different types, characterized by their rewards and by the resources they consume, we recover a general resource allocation problem. This (multidimensional) generalization, however, does not hinder our ability to construct an adaptive budget-ratio policy whose regret again does not depend on the time horizon T or on the initial budget of each resource. Our analysis focuses on the geometric evolution of the remaining resource budgets as a stochastic process, proving that it is attracted to “sticky” regions of the state space where the budget-ratio policy takes actions consistent with the optimal offline solution. (Based on joint work with A. Vera, I. Gurvich, and E. Levin.)
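To make the flavor of such a policy concrete, here is a simplified quantile-threshold sketch in the spirit of a budget-ratio policy for the multi-secretary problem. It is not the paper's exact policy; in particular, atoms of the distribution are handled crudely (the true adaptive policy resolves ties and re-solves ratios more carefully), and all names are mine.

```python
import random

def adaptive_threshold_policy(values, k, support, probs):
    """Select up to k of the T sequentially revealed values.

    At each step, accept the current value if it lies in roughly the top
    (remaining budget) / (remaining time) quantile of the known
    distribution.  A simplified stand-in for the budget-ratio policy.
    """
    T = len(values)
    selected = []
    for t, v in enumerate(values):
        if k == 0:
            break
        ratio = k / (T - t)  # budget ratio: selections left per arrival left
        # Smallest support point whose upper-tail probability is <= ratio.
        mass, threshold = 0.0, None
        for x, p in sorted(zip(support, probs), reverse=True):
            mass += p
            if mass > ratio:
                break
            threshold = x
        # Accept high values; also take everything once arrivals <= budget.
        if (threshold is not None and v >= threshold) or T - t <= k:
            selected.append(v)
            k -= 1
    return selected

rng = random.Random(0)
support, probs = [1, 2, 3], [1 / 3, 1 / 3, 1 / 3]
values = [rng.choice(support) for _ in range(30)]
picks = adaptive_threshold_policy(values, 10, support, probs)
print(len(picks), picks)
```

The key feature the talk analyzes is that the acceptance threshold adapts to the current budget ratio rather than being fixed in advance, which is what keeps the regret bounded by a constant.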

Bio

Alessandro Arlotto is an Associate Professor of Business Administration and Mathematics at Duke University’s Fuqua School of Business. Alessandro received his Ph.D. in 2012 from the University of Pennsylvania. Alessandro’s research interests are in applied probability, stochastic modelling, stochastic dynamic programming, and their applications. His research has appeared in several journals including the Annals of Applied Probability, Management Science, Mathematics of Operations Research, Operations Research, Stochastic Systems, and Stochastic Processes and their Applications. Alessandro is a recipient of the Faculty Early Career Development (CAREER) award from the National Science Foundation, and a winner of the 2021 Best Publication Award of the INFORMS Applied Probability Society. Alessandro currently serves as Associate Editor for the journals Operations Research and Stochastic Systems.

David A. Goldberg

Cornell University

New approach to high-dimensional optimal stopping and control (slides)

Date: May 09, 2022




Abstract

The fundamental problem of high-dimensional path-dependent optimal stopping is central to applied probability, financial engineering, operations research, and stochastic control. Modern approaches, often relying on ADP, simulation, and/or duality, typically have limited rigorous guarantees, which may scale poorly and/or require prior knowledge of good basis functions. A key difficulty with many approaches is that to yield stronger guarantees, they would necessitate the computation of deeply nested conditional expectations, with the depth scaling with the time horizon T.

We overcome this fundamental obstacle by providing an algorithm which can trade off, in a principled manner, between the guaranteed quality of approximation and the level of nesting required. This leads to a representation for the optimal value as an infinite sum for which: 1. each term is the expectation of an elegant recursively defined infimum; 2. the first k terms only require k levels of nesting; and 3. truncating at the first k terms yields a (normalized) error of 1/k. This enables us to devise simple randomized and data-driven algorithms that can trade off accuracy and runtime through a parameter epsilon controlling the performance guarantee (analogous to the notion of a PTAS). The computational and sample complexity achieved are both polynomial in T (and effectively independent of the dimension) for any fixed epsilon, in contrast to past methods, which typically require complexity scaling exponentially. Time permitting, we will also discuss connections to the theory of network flows and stochastic optimization, generalizations to dynamic pricing and online combinatorial optimization, and lower bounds on the trade-off between accuracy and the level of nesting required. Joint work with Yilun Chen.

Bio

David A. Goldberg is an Associate Professor in Cornell's ORIE department. Previously, he was the A. Russell Chandler III Associate Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. He received his Ph.D. in Operations Research at MIT in 2011, and his B.S. in Computer Science (minors in Operations Research and Applied Math) from Columbia University in 2006. Goldberg’s research is in applied probability, on topics including optimal stopping, control, and options pricing; inventory and queueing models; asymptotic analysis and large deviations; combinatorial optimization; and robust optimization. His work has been recognized with accolades including an NSF CAREER award, the 2019 INFORMS Applied Probability Society Best Publication award, first place in the 2019 INFORMS Nicholson Competition, finalist in the 2018 INFORMS Finance Student Paper Competition, first place in the 2015 INFORMS Nicholson Competition, second place in the 2015 INFORMS JFIG Competition, and finalist in the 2014 MSOM and 2010 INFORMS Nicholson competitions. He is also an associate editor for the journals Operations Research and Stochastic Models, and vice chair of the INFORMS Applied Probability Society.

Itai Gurvich

Kellogg School of Management, Northwestern University

Near-Optimal Policies for Dynamic Matching (slides)

Date: May 16, 2022




Abstract

We consider centralized dynamic matching markets with finitely many agent types and heterogeneous match values. A network topology determines the feasible matches in the market and the value generated from each match. An inherent trade-off arises between short- and long-term objectives. A social planner may delay match decisions to thicken the market and increase match opportunities to generate high value. This inevitably compromises short-term value, and the planner may match greedily to maximize short-term objectives.

A matching policy is hindsight optimal if the policy can (nearly) maximize the total value simultaneously at all times. We first establish that in multi-way networks, where a match can include more than two agent types, acting greedily is suboptimal, and a periodic clearing policy with a carefully chosen period length is hindsight optimal. Interestingly, in two-way networks, where any match includes two agent types, suitably designed greedy policies also achieve hindsight optimality. This implies that there is essentially no positive externality from having agents waiting to form future matches.

Central to our results is the general position gap, ε, which quantifies the stability or the imbalance in the network. No policy can achieve a regret that is lower than the order of 1/ε at all times. This lower bound is achieved by the proposed policies.

The talk is based on joint work with Suleyman Kerimov and Itai Ashlagi.

Bio

Itai Gurvich is a Professor at the Kellogg School of Management, Northwestern University. He earned a Ph.D. from the Decision, Risk and Operations department at Columbia University’s Graduate School of Business in 2008. After his PhD, he spent 8 years at Kellogg and 4 years at Cornell University’s campus in New York City (Cornell Tech) before returning to Kellogg in 2021. His research centers on the performance analysis and optimization of stochastic processing networks. Itai teaches courses in operations management and service engineering.

David Yao

Columbia University

Square-Root Voting: Related Stochastic Processes and Limiting Regimes (slides)

Date: May 23, 2022




Abstract

The square-root voting rule was first proposed by Lionel Penrose (Roger's father): each member country of a world assembly (such as the UN) is given a number of votes equal to the square root of its population. The idea has become newly relevant in the POS (proof-of-stake) schemes of cryptocurrencies (e.g., Ethereum) and other decentralized voting mechanisms. Focusing on the processes representing the total wealth over time and each participating entity’s share, we study their limiting distributions and related concentration inequalities. (Joint work with Wenpin Tang, Columbia/IEOR.)
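The rule itself is easy to state in code. A toy sketch with made-up populations (the function name and numbers are illustrative, not from the talk):

```python
import math

def penrose_votes(populations):
    """Penrose square-root rule: each entity gets sqrt(population) votes."""
    return {c: math.sqrt(p) for c, p in populations.items()}

pops = {"A": 1_000_000, "B": 250_000, "C": 40_000}
votes = penrose_votes(pops)
total = sum(votes.values())
shares = {c: v / total for c, v in votes.items()}
print(shares)
```

Note the dampening effect: country A has four times B's population but only twice B's votes, which is the point of the rule, and the dynamics of such vote/wealth shares over time are what the talk studies.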

Bio

David Yao is the Piyasombatkul Family Professor of Industrial Engineering and Operations Research at Columbia University, where he also serves as co-chair of the Financial and Business Analytics Center at Columbia Data Science Institute. His research and teaching interests are in stochastic systems, focusing on problems in resource control and risk analyses. He is an IEEE Fellow, an INFORMS Fellow, and a member of the National Academy of Engineering.

Assaf Zeevi

Columbia University

Structured Learning in Sequential Selection Problems

Date: June 13, 2022




Abstract

In this talk I will describe two vignettes of sequential decision making problems arising in the OR literature. The first is the classical “house selling” problem (or optimal stopping): given a random sequence of independent observations revealed one at a time over some finite horizon of play, the objective is to design an algorithm that “stops” this sequence so as to maximize the expected value of the “stopped” observation. The second vignette is the worker assignment problem (or sequential stochastic assignment): arriving items (“jobs”) with independent stochastic attributes need to be sequentially matched to a pool of awaiting recipients (“workers”). Once a job and a worker are matched, they are no longer admissible for further assignment, and the objective is to maximize the expected cumulative value of the resulting matchings. Both problems date back more than 50 years and can be solved in a relatively straightforward manner using dynamic programming; both have been used as modeling constructs in numerous applications, including recent online marketplaces. Somewhat surprisingly, neither has been studied extensively when the underlying information on the stochastic primitives is unknown a priori. While it is possible to analyze this incomplete-information setting with generic multi-purpose learning-theoretic methods, these tend to be quite inefficient when further “structure” is present in the problem and can be exploited to construct more customized algorithms. The main focus of this talk is to broadly describe possible learning-theoretic formulations of the two vignettes, and to illustrate some “design principles” for constructing near-optimal policies.
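The complete-information version of the house-selling vignette is indeed a short backward-induction computation. A sketch for i.i.d. offers with a known finite support (illustrative code, not from the talk; the learning-theoretic setting the talk addresses is what happens when this distribution is unknown):

```python
def stopping_values(support, probs, T):
    """Backward induction for house selling with known i.i.d. offers.

    V[t] is the optimal expected value when t offers remain; the optimal
    rule stops at an offer x as soon as x >= V[t - 1] (the continuation
    value with one fewer offer remaining).
    """
    ev = sum(x * p for x, p in zip(support, probs))
    V = [0.0] * (T + 1)
    V[1] = ev  # one offer left: take whatever arrives
    for t in range(2, T + 1):
        cont = V[t - 1]
        V[t] = sum(max(x, cont) * p for x, p in zip(support, probs))
    return V

# Offers uniform on {1, ..., 10}, horizon 5: the continuation value
# (and hence the stopping threshold) rises with the offers remaining.
V = stopping_values(list(range(1, 11)), [0.1] * 10, 5)
print(V[1:])
```

With the distribution known, the whole problem reduces to this recursion; without it, the thresholds must be learned from the revealed observations themselves, which is the structured-learning question of the talk.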

Bio

Assaf Zeevi is Professor and holder of the Kravis chair at the Graduate School of Business, Columbia University. His research and teaching interests lie at the intersection of Operations Research, Statistics, and Machine Learning. In particular, he has been developing theory and algorithms for reinforcement learning, bandit problems, stochastic optimization, statistical learning and stochastic networks. Assaf's work has found applications in online retail, healthcare analytics, dynamic pricing, recommender systems, and social learning in online marketplaces. Assaf received his B.Sc. and M.Sc. (Cum Laude) from the Technion, in Israel, and subsequently his Ph.D. from Stanford University. He spent time as a visitor at Stanford University, the Technion and Tel Aviv University. He is the recipient of several teaching and research awards including a CAREER Award from the National Science Foundation, an IBM Faculty Award, a Google Research Award, as well as several best paper awards including the 2019 Lanchester Prize. Assaf has recently served a term as Vice Dean at Columbia Business School and Editor-in-Chief of Stochastic Systems (the flagship journal of INFORMS' Applied Probability Society). He also serves on various other editorial boards and program committees in the Operations Research and Machine Learning communities, as well as scientific advisory boards for startup companies in the high technology sector.