Fall 2021

Vivek Shripad Borkar

Indian Institute of Technology Bombay

Dynamic choice with reinforcement under graphical constraints (Slides)

Date: July 27, 2021




Abstract

We consider a model of dynamic choice over items identified with the nodes of a graph, with positive reinforcement modeled by a transition probability proportional to an α-homogeneous (α > 0) positive function of the cumulative reward for the item, but under graphical constraints that restrict the choice at the next instant to the neighbors of the current node. The corresponding empirical process can be written as a stochastic approximation with Markov noise and has dynamics similar to those of the empirical process of a vertex-reinforced random walk. We analyze its asymptotic behavior using a limiting differential equation, both for fixed α and as α ↑ ∞ slowly, in what we dub the 'annealed dynamics'. In particular, we show convergence to a local, resp. global, maximum of a certain 'potential'. (Joint with Konstantin Avrachenkov (INRIA Sophia Antipolis), Sharayu Moharir (IIT Bombay) and Suhail Mohmad Shah (Hong Kong University of Science and Technology))
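As a toy illustration of the dynamics described above (the cycle graph, the reward values, and the choice f(x) = x^α below are hypothetical, purely for intuition), a minimal simulation might look like:

```python
import random

def reinforced_walk(neighbors, rewards, alpha, steps, start=0, seed=0):
    """Graph-constrained choice with positive reinforcement: from the
    current node, the next node is a neighbor j chosen with probability
    proportional to S_j ** alpha, where S_j is j's cumulative reward
    (here f(x) = x**alpha plays the role of the alpha-homogeneous function)."""
    rng = random.Random(seed)
    n = len(rewards)
    S = [1.0] * n            # cumulative rewards, seeded at 1 to avoid zeros
    visits = [0] * n
    node = start
    for _ in range(steps):
        nbrs = neighbors[node]
        weights = [S[j] ** alpha for j in nbrs]
        r = rng.random() * sum(weights)
        for j, w in zip(nbrs, weights):
            r -= w
            if r <= 0:
                node = j
                break
        S[node] += rewards[node]   # reinforce the item just chosen
        visits[node] += 1
    return visits

# 4-cycle; node 1 carries the largest per-visit reward, so for larger
# alpha the walk concentrates on it and the nodes it forces visits to.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
rewards = [0.1, 1.0, 0.1, 0.5]
visits = reinforced_walk(neighbors, rewards, alpha=2.0, steps=5000)
```

Because the 4-cycle is bipartite, the walk alternates between the sides {0, 2} and {1, 3}; reinforcement then decides how visits split within each side.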

Bio

Vivek Borkar obtained his B.Tech. (EE '76) from IIT Bombay, M.S. (Systems and Control '77) from Case Western Reserve University, and Ph.D. (EECS '80) from the University of California, Berkeley. He has held positions at the TIFR Centre and the Indian Institute of Science in Bengaluru, and at the Tata Institute of Fundamental Research and IIT Bombay in Mumbai, where he is currently an S. S. Bhatnagar Emeritus Fellow. He is a Fellow of IEEE, AMS, and the science and engineering academies in India. His research interests are in stochastic optimization and control: theory, algorithms, and applications, particularly to communications and machine learning.

Shipra Agrawal

Columbia University

Dynamic Pricing and Learning under the Bass Model (Slides)

Date: July 19, 2021




Abstract

We consider a novel formulation of the dynamic pricing and demand learning problem, where the evolution of demand in response to posted prices is governed by a stochastic variant of the popular Bass model with parameters (α, β) that are linked to the so-called "innovation" and "imitation" effects. Unlike the more commonly used i.i.d. demand models, in this model the price posted not only affects the demand and the revenue in the current round but also the evolution of demand, and hence the fraction of market potential that can be captured, in future rounds. Finding a revenue-maximizing dynamic pricing policy in this model is non-trivial even when model parameters are known, and requires solving for the optimal non-stationary policy of a continuous-time, continuous-state MDP. In this paper, we consider the problem where dynamic pricing is used in conjunction with learning the model parameters, with the objective of optimizing the cumulative revenues over a given selling horizon. Our main contribution is an algorithm with a regret guarantee of O(m^{2/3}), where m is mnemonic for the (known) market size. Moreover, we show that no algorithm can incur a smaller order of loss by deriving a matching lower bound. We observe that in this problem the market size m, and not the time horizon T, is the fundamental driver of the complexity; our lower bound in fact indicates that for any fixed α, β, most non-trivial instances of the problem have constant T and large m. This insight sets the problem setting considered here uniquely apart from the MAB-type formulations typically considered in the learning-to-price literature.
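For intuition about the innovation/imitation dynamics, here is a minimal discrete-time sketch of Bass-type adoption (not the paper's continuous-time stochastic model, with the pricing control omitted and parameter values hypothetical):

```python
import random

def bass_adoptions(m, a, b, periods, seed=0):
    """Toy discrete-time stochastic Bass model: in each period, every
    remaining non-adopter adopts independently with probability
    a + b * (adopted so far) / m, the sum of an "innovation" term a and
    an "imitation" term proportional to the adopted fraction.
    Returns the cumulative adoption path."""
    rng = random.Random(seed)
    adopted = 0
    path = []
    for _ in range(periods):
        p = min(1.0, a + b * adopted / m)
        new = sum(rng.random() < p for _ in range(m - adopted))
        adopted += new
        path.append(adopted)
    return path

path = bass_adoptions(m=1000, a=0.01, b=0.4, periods=30)
```

In the paper's setting the posted price additionally modulates these adoption probabilities, which is what makes pricing today affect the market that remains tomorrow.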

Bio

Shipra Agrawal is an associate professor of Industrial Engineering and Operations Research at Columbia University, and an affiliate of Computer Science and the Data Science Institute. Her research spans several areas of optimization and machine learning, including multi-armed bandits, online learning, and reinforcement learning. Shipra serves as an associate editor for Management Science, Mathematics of Operations Research, INFORMS Journal on Optimization, and the Journal of Machine Learning Research. Her research is supported by a Google Faculty Research Award, an Amazon Research Award, and an NSF CAREER Award.


Kuang Xu

Stanford University

Diffusion Asymptotics for Sequential Experiments (paper link) (Slides)

Date: Aug 16, 2021




Abstract

We propose a new diffusion-asymptotic analysis for sequentially randomized experiments, including those that arise in solving multi-armed bandit problems. In an experiment with n time steps, we let the mean reward gaps between actions scale as 1/√n so as to preserve the difficulty of the learning task as n grows. In this regime, we show that the behavior of a class of sequentially randomized Markov experiments converges to a diffusion limit, given as the solution of a stochastic differential equation. The diffusion limit thus enables us to derive a refined, instance-specific characterization of the stochastic dynamics of adaptive experiments. As an application of this framework, we use the diffusion limit to obtain several new insights on the regret and belief evolution of Thompson sampling. We show that a version of Thompson sampling with an asymptotically uninformative prior variance achieves nearly-optimal instance-specific regret scaling when the reward gaps are relatively large. We also demonstrate that, in this regime, the posterior beliefs underlying Thompson sampling are highly unstable over time.
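A sketch of Gaussian Thompson sampling under this 1/√n gap scaling may help fix ideas (two arms, unit-variance rewards, conjugate Gaussian posteriors; all parameter values hypothetical):

```python
import math, random

def thompson_regret(n, gap_scale=1.0, prior_var=1.0, seed=0):
    """Gaussian Thompson sampling on two arms whose mean gap is
    gap_scale / sqrt(n), the diffusion scaling that keeps the learning
    problem hard as n grows. Rewards have unit variance; each arm's
    mean has a conjugate N(0, prior_var) prior. Returns cumulative regret."""
    rng = random.Random(seed)
    gap = gap_scale / math.sqrt(n)
    means = [gap, 0.0]                  # arm 0 is better by `gap`
    mu, var = [0.0, 0.0], [prior_var, prior_var]
    regret = 0.0
    for _ in range(n):
        s0 = rng.gauss(mu[0], math.sqrt(var[0]))
        s1 = rng.gauss(mu[1], math.sqrt(var[1]))
        a = 0 if s0 >= s1 else 1        # sample from posteriors, play argmax
        reward = rng.gauss(means[a], 1.0)
        prec = 1.0 / var[a] + 1.0       # conjugate update, unit noise variance
        mu[a] = (mu[a] / var[a] + reward) / prec
        var[a] = 1.0 / prec
        regret += means[0] - means[a]
    return regret

r = thompson_regret(n=10000)
```

Since every suboptimal pull costs exactly gap = gap_scale/√n, the cumulative regret is at most gap_scale·√n; the diffusion limit characterizes where in that range it actually lands, instance by instance.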

Bio

Kuang Xu is an Associate Professor of Operations, Information and Technology at Stanford Graduate School of Business, and Associate Professor by courtesy with the Electrical Engineering Department, Stanford University. Born in Suzhou, China, he received the B.S. degree in Electrical Engineering (2009) from the University of Illinois at Urbana-Champaign, and the Ph.D. degree in Electrical Engineering and Computer Science (2014) from the Massachusetts Institute of Technology. His research focuses on understanding fundamental properties and design principles of large-scale stochastic systems using tools from probability theory and optimization, with applications in queueing networks, privacy and machine learning. He is a recipient of the First Place in the INFORMS George E. Nicholson Student Paper Competition (2011), the Best Paper Award, as well as the Kenneth C. Sevcik Outstanding Student Paper Award at ACM SIGMETRICS (2013), and the ACM SIGMETRICS Rising Star Research Award (2020). He currently serves as an Associate Editor for Operations Research.

Jamol J. Pender

Cornell University

Beyond Safety Drivers: Staffing a Teleoperations System for Autonomous Vehicles (Slides)

Date: Aug 23, 2021




Abstract

Driverless vehicles promise a host of societal benefits including dramatically improved safety, increased accessibility, greater productivity, and higher quality of life. As this new technology approaches widespread deployment, both industry and government are making provisions for teleoperations systems, in which remote human agents provide assistance to driverless vehicles. This assistance can involve real-time remote operation and even ahead-of-time input via human-in-the-loop artificial intelligence systems. In this paper, we address the problem of staffing such a remote support center. Our analysis focuses on the tradeoffs between the total number of remote agents, the reliability of the remote support system, and the resulting safety of the driverless vehicles. By establishing a novel connection between queues with large batch arrivals and storage processes, we determine the probability of the system exceeding its service capacity. This connection drives our staffing methodology. We also develop a numerical method to compute the exact staffing level needed to achieve various performance measures. Our overall staffing analysis may be of use in other applications that combine human expertise and automated systems.

Bio

Jamol Pender is an Associate Professor of Operations Research and Information Engineering at Cornell University. He received his Ph.D. from Princeton University in 2013 where he was advised by Dr. William Massey. He joined Cornell University in 2015, and serves as a Faculty Fellow in the Residence Halls at Cornell. Jamol is a recipient of a Ford Foundation Fellowship, the NSF CAREER award, MIF Career Award, and several teaching and advising awards, including the Sonny Yau award for teaching excellence and the Zellman Warhaft Commitment to Diversity Award. Jamol's research focuses on how to disseminate information to customers in queues and how this information affects the underlying dynamics in queues. He is very interested in the interplay between stochastic processes, simulation, and non-linear dynamics. Jamol is heavily involved in the Applied Probability Society of INFORMS, where he served as the INFORMS APS 2019 conference co-chair.

Amarjit Budhiraja

University of North Carolina at Chapel Hill

Near Equilibrium Fluctuations for Certain Supermarket Models (Slides)

Date: Aug 30, 2021




Abstract

We consider the supermarket model in the usual Markovian setting where jobs arrive at a rate that scales with the size of the system. An arriving job joins the shortest among d randomly selected service queues. The goal is to study fluctuations of the state process about its near equilibrium state in the critical regime, when the number of selections increases with system size. We characterize the different types of possible behavior depending on the manner in which the system approaches criticality and the number of selections increases with system size. In particular, we identify three canonical regimes in which fluctuations of the state process about its near equilibrium are of the order of the square root of the system size and are governed asymptotically by a one-dimensional Brownian motion. The forms of the limit processes in the three regimes are quite different: in the first case we get a linear diffusion; in the second case, a diffusion with an exponential drift; and in the third case, a reflected diffusion in a half space. This is joint work with Shankar Bhamidi and Miheer Dewaskar.
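A minimal jump-chain simulation of the power-of-d dynamics described above (parameter values hypothetical, purely illustrative):

```python
import random

def supermarket(n, lam, d, events, seed=0):
    """Jump chain of the supermarket model: Poisson arrivals at total
    rate lam * n and n unit-rate servers. At each event, an arrival
    occurs with probability proportional to lam * n (versus the number
    of busy servers); the arriving job joins the shortest of d queues
    sampled uniformly at random."""
    rng = random.Random(seed)
    q = [0] * n
    for _ in range(events):
        busy = [i for i, x in enumerate(q) if x > 0]
        if rng.random() * (lam * n + len(busy)) < lam * n:
            j = min(rng.sample(range(n), d), key=lambda i: q[i])
            q[j] += 1                  # arrival: power-of-d choice
        else:
            q[rng.choice(busy)] -= 1   # departure from a busy server
    return q

q = supermarket(n=100, lam=0.9, d=2, events=20000)
```

The regimes in the talk concern what happens when d itself grows with n near criticality, which a fixed-d simulation like this can only hint at.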

Bio

Budhiraja is a Professor of Statistics and Operations Research at the University of North Carolina at Chapel Hill. His research interests include stochastic networks, stochastic control, stochastic differential games, theory of large deviations, nonlinear filtering, and stochastic dynamical systems. More details can be found at https://abudhiraja.web.unc.edu.

Mor Armony

New York University

Hospitalization versus Home Care: Balancing Mortality and Infection Risks for Hematology Patients (Slides)

Date: Sep 13, 2021




Abstract

Previous research has shown that early discharge of patients may hurt their medical outcomes. However, in many cases, the "optimal" length of stay (LOS) and the best location for treatment of the patient are not obvious. A case in point is hematology patients, for whom this is a critical decision. These patients are hospitalized on a regular basis for chemotherapy treatments and it is debated whether following these treatments the patients should stay at the hospital for an observation period or be sent home instead. Patients with hematological malignancies are susceptible to life-threatening infections after chemotherapy. Hence, LOS optimization for hematology patients must balance the risks of patient infection and mortality. The former is reduced by minimizing hospital stay, while the latter is reduced by maximizing hospital stay, whereby infections can be identified and treated earlier. We develop a Markov decision process formulation to explore the impact of the infection and mortality risks on the optimal LOS from a single-patient perspective. We further consider the social optimization problem in which capacity constraints limit the ability of hospitals to keep patients for the entirety of their optimal LOS. We find that the optimal policy under this constraint takes the form of a two-threshold policy. This policy may block some patients and immediately route them to home care, or speed up some patients' LOS and send them to home care early after an observation period in the hospital. Joint work with Prof. Galit Yom-Tov from the Technion.
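The two-threshold structure can be sketched as follows (the threshold values and labels below are hypothetical, for illustration only; the talk derives the optimality of this form):

```python
def route_patient(occupancy, theta_speed, theta_block):
    """Illustrative two-threshold admission rule: at high occupancy the
    arriving patient is blocked and routed directly to home care; at
    intermediate occupancy the patient is admitted but an inpatient's
    observation stay is shortened to free a bed; otherwise the patient
    is admitted for the full observation period."""
    assert theta_speed < theta_block
    if occupancy >= theta_block:
        return "block"      # new patient goes directly to home care
    if occupancy >= theta_speed:
        return "speedup"    # admit; discharge someone early to home care
    return "admit"          # full observation stay in hospital
```

The point of the structure is that capacity pressure is absorbed in two distinct ways, by blocking arrivals and by shortening stays, each triggered at its own occupancy level.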

Bio

Mor Armony is the Vice Dean of Faculty, Harvey Golub Professor of Business Leadership, and Professor of Technology, Operations & Statistics at New York University Stern School of Business. Professor Armony teaches courses in operations management and in service operations.

Professor Armony's primary research areas of interest include management of patient flow in healthcare, optimization of customer experience in contact centers, and general stochastic modeling of various operations. Her articles have appeared in numerous publications, including Management Science, Operations Research, and Queueing Systems.

Before joining NYU Stern, Professor Armony served as a consultant for Lucent Technologies and for AT&T. She also developed mathematical models for the prediction of financial indexes at Eventus, Israel.

Professor Armony received her Bachelor of Science in mathematics and statistics and her Master of Science in statistics from the Hebrew University of Jerusalem. She also received a Master of Science and a Doctor of Philosophy in engineering-economic systems and operations research from Stanford University.

Charles Bordenave

Institut de Mathématiques de Marseille

Entropy of processes on infinite trees (slides)

Date: Sep 20, 2021




Abstract

This is joint work with Ágnes Backhausz and Balázs Szegedy. We define a natural notion of micro-state entropy associated to a random process on a unimodular random tree. This entropy is closely related to Bowen's sofic entropy in dynamical systems. It allows one to compute the asymptotic free energy of factor models on random graphs and gives a variational framework for solving many combinatorial optimization problems on random graphs. We give a formula for this entropy for a large class of processes.

Bio

Charles Bordenave is a CNRS Research Director at Aix-Marseille University. He obtained a PhD in 2006 from École Polytechnique under the supervision of François Baccelli. His current research interests include random matrices, random graphs, random walks, operator algebras and their applications. He currently serves as Associate Editor of Annals of Probability, Annals of Applied Probability, Bernoulli Journal and Annales de la Faculté des Sciences de Toulouse. He was awarded the Marc Yor Prize in 2017 and was an IMS Medallion Lecturer in 2017. More details can be found at http://www.i2m.univ-amu.fr/perso/charles.bordenave/.

Weina Wang

Carnegie Mellon University

Sharp Waiting-Time Bounds for Multiserver Jobs (slides)

Date: Sep 27, 2021




Abstract

Multiserver jobs, which are jobs that occupy multiple servers simultaneously during service, are prevalent in today's computing clusters. However, little is known about the delay performance of systems with multiserver jobs. In this talk, to capture the large scale of modern computing clusters, we consider the scheduling problem for multiserver jobs in systems where the total number of servers becomes large. The multiserver job model opens up new scaling regimes where both the number of servers that a job needs and the system load scale with the total number of servers. Within these scaling regimes, we first characterize when the queueing probability diminishes as the system becomes large, and then turn our focus to the mean waiting time, i.e., the time a job spends in the queue rather than in service. In particular, we derive order-wise sharp bounds on the mean waiting time under various policies. The sharpness of the bounds allows us to establish the asymptotic optimality of a priority policy that we call P-Priority, and the strict suboptimality of the commonly used First-Come-First-Serve (FCFS) policy. This talk is based on joint works with Yige Hong, Qiaomin Xie, and Mor Harchol-Balter.
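The head-of-line blocking that hurts FCFS with multiserver jobs can be seen in a few lines (an illustrative sketch, not the talk's model or its P-Priority policy):

```python
def fcfs_started(demands, servers):
    """Which queued multiserver jobs start service under FCFS: scan in
    arrival order; a job needing k servers starts only if k servers are
    still free. FCFS never skips a blocked head-of-line job, so small
    later jobs can be stuck behind one large job even when servers
    would be free for them."""
    free = servers
    started = []
    for k in demands:              # demands[i] = servers job i needs
        if k > free:
            break                  # head-of-line blocking
        free -= k
        started.append(k)
    return started
```

For example, with 4 servers a head-of-line job needing 5 servers idles the entire cluster, which is one intuition for the strict suboptimality of FCFS in these scaling regimes.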

Bio

Weina Wang is an Assistant Professor in the Computer Science Department at Carnegie Mellon University. Her research lies in the broad area of applied probability and stochastic systems, with applications in resource orchestration in large computing systems, data centers, and privacy-preserving data analytics. She was a joint postdoctoral research associate in the Coordinated Science Lab at the University of Illinois at Urbana-Champaign, and in the School of ECEE at Arizona State University, from 2016 to 2018. She received her Ph.D. degree in Electrical Engineering from Arizona State University in 2016, and her Bachelor's degree from the Department of Electronic Engineering at Tsinghua University in 2009. Her dissertation received the Dean’s Dissertation Award in the Ira A. Fulton Schools of Engineering at Arizona State University in 2016. She received the Kenneth C. Sevcik Outstanding Student Paper Award at ACM SIGMETRICS 2016.

Philippe Robert

INRIA

Stochastic Models of Neural Synaptic Plasticity (slides)

Date: Oct 04, 2021




Abstract

In neuroscience, learning and memory are usually associated with long-term changes of neuronal connectivity. Synaptic plasticity refers to the set of mechanisms driving the dynamics of neuronal connections, called synapses and represented by a scalar value, the synaptic weight. Spike-Timing Dependent Plasticity (STDP) is a biologically-based model representing the time evolution of the synaptic weight as a functional of the past spiking activity of adjacent neurons.


In this talk we present a new, general, mathematical framework to study synaptic plasticity associated with different STDP rules. The system composed of two neurons connected by a single synapse is investigated and a stochastic process describing its dynamical behavior is presented and analyzed. We show that a large number of STDP rules from neuroscience and physics can be represented by this formalism. Several aspects of these models are discussed and compared to canonical models of computational neuroscience. An important sub-class of plasticity kernels with a Markovian formulation is also defined and investigated via averaging principles. Joint work with Gaëtan Vignoud.
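As one concrete member of the class of STDP rules covered by such a framework, here is the classical pair-based kernel (parameter values hypothetical):

```python
import math

def stdp_update(w, dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: dt = t_post - t_pre (in ms, say). A pairing in
    which the presynaptic spike precedes the postsynaptic one (dt > 0)
    potentiates the synaptic weight w; the reverse order depresses it,
    in both cases with exponential decay in |dt|."""
    if dt > 0:
        return w + a_plus * math.exp(-dt / tau)
    return w - a_minus * math.exp(dt / tau)
```

The framework in the talk treats this and many other kernels uniformly as functionals of the joint spiking history of the two neurons.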

Bio

Philippe Robert is a Research Director at INRIA. He received his PhD from Sorbonne Université in Paris. His research interests include theoretical aspects of stochastic networks, random algorithms, and scaling methods for Markov processes. Stochastic models of molecular biology are currently the main applications of his research. He teaches at Sorbonne Université in Paris. He wrote the book "Stochastic Networks and Queues", published by Springer-Verlag New York in 2003. https://www-rocq.inria.fr/who/Philippe.Robert

Yash Kanoria

Columbia Business School

Dynamic Spatial Matching (link to working paper) (Slides)

Date: Oct 11, 2021




Abstract

Motivated by a variety of online matching markets, we consider demand and supply which arise i.i.d. uniformly in [0,1]^d, and need to be matched with each other while minimizing the expected average distance between matched pairs (the "cost"). We characterize the achievable cost in three models as a function of the dimension d and the amount of excess supply (M or m): (i) Static matching of N demand units with N+M supply units. (ii) A semi-dynamic model where N+M supply units are present beforehand and N demand units arrive sequentially and must be matched immediately. (iii) A fully dynamic model where there are always m supply units present in the system, one supply and one demand unit arrive in each period, and the demand unit must be matched immediately. We show that cost nearly as small as the distance to the nearest neighbor is achievable in all cases *except* models (i) and (ii) for d=1 and M = o(N). Moreover, the latter is the only case in models (i) and (ii) where excess supply significantly reduces the achievable cost.
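A toy version of the semi-dynamic model (ii), using naive greedy nearest-neighbor matching rather than the near-optimal policies analyzed in the paper:

```python
import random

def greedy_cost(n_demand, n_supply, d=1, seed=0):
    """Semi-dynamic toy: n_supply points placed i.i.d. uniform in
    [0,1]^d up front; demands arrive one by one, each matched greedily
    to the nearest still-unmatched supply point. Returns the average
    matched distance (the "cost")."""
    rng = random.Random(seed)
    supply = [[rng.random() for _ in range(d)] for _ in range(n_supply)]
    free = set(range(n_supply))
    total = 0.0
    for _ in range(n_demand):
        x = [rng.random() for _ in range(d)]
        dist = lambda i: sum((a - b) ** 2 for a, b in zip(x, supply[i])) ** 0.5
        j = min(free, key=dist)
        total += dist(j)
        free.discard(j)
    return total / n_demand

cost = greedy_cost(n_demand=200, n_supply=300, d=1)
```

Varying n_supply relative to n_demand in such a simulation illustrates the role of excess supply M, which the abstract shows matters chiefly in the d = 1, M = o(N) case.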

Bio

Yash Kanoria is an Associate Professor of Business in the Decision, Risk and Operations division at Columbia Business School, working on the design and operations of marketplaces, especially matching markets. Previously, he obtained a BTech in Electrical Engineering from IIT Bombay in 2007, a PhD from Stanford in 2012, and spent a year at Microsoft Research New England during 2012-13 as a Schramm postdoctoral fellow. He received a National Science Foundation CAREER Award in 2017. He was a finalist for the 2018 Wagner Prize in Operations Research practice for the design of a large-scale centralized seat allocation process for engineering colleges in India.

Sandeep Juneja

Tata Institute of Fundamental Research

Stochastic Multi Armed Bandits and Heavy Tails (slides)

Date: Oct 18, 2021




Abstract

In this talk we consider two classic and popular problems in the stochastic multi-armed bandit setting. 1) Regret minimization: given a finite set of unknown reward distributions (or arms) that can be sequentially sampled, the aim is to sample in a manner that maximizes the expected reward or, equivalently, minimizes the expected regret over a large sampling horizon. 2) Best arm identification: in a similar sequential setting, the aim is to select the arm with the largest mean using a minimum number of samples on average, while keeping the probability of false selection below a pre-specified small value. Both problems, with a myriad of variations, have been well studied in the literature. The analysis techniques for the two problems are typically different, and the arm distributions are typically restricted to a small class such as a single-parameter exponential family or distributions that are bounded with known bounds. In practice, such restrictions often do not hold, and the arm distributions may even be heavy-tailed. In this talk we discuss how to optimally solve both of the above problems when minimal restrictions are imposed on the arm distributions. Further, we highlight the commonalities of the techniques used in solving the two problems.
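For reference, here is the bounded-reward baseline whose assumptions the talk seeks to remove: classic UCB1 on Bernoulli arms (a textbook sketch, not the talk's algorithm; arm means are hypothetical):

```python
import math, random

def ucb1(means, horizon, seed=0):
    """UCB1 for regret minimization with rewards in [0,1] (here
    Bernoulli arms with the given means). After pulling each arm once,
    play the arm maximizing empirical mean + sqrt(2 ln t / pulls).
    Returns the cumulative (pseudo-)regret."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    best = max(means)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1                  # initialization: pull each arm once
        else:
            a = max(range(k), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[a] else 0.0
        counts[a] += 1
        sums[a] += reward
        regret += best - means[a]
    return regret

r = ucb1([0.3, 0.5, 0.7], horizon=5000)
```

The confidence width here leans on boundedness; under heavy tails it is no longer valid, which is exactly the gap the minimal-assumption results in the talk address.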

Bio

Sandeep is a senior professor at the School of Technology and Computer Science at the Tata Institute of Fundamental Research in Mumbai. He received his Ph.D. in Operations Research from Stanford University. Thereafter he worked for a financial credit insurance company and then a management consulting firm before joining academia. His research interests lie in applied probability, including sequential learning, financial mathematics, Monte Carlo methods, and game theoretic analysis of queues. Lately, he has been involved in modeling Covid-19 spread in Mumbai and in the mathematics of epidemiological simulation models. He is currently on the editorial board of Stochastic Systems. Earlier he has been on the editorial boards of Mathematics of Operations Research, Management Science, and ACM TOMACS.

Kristen S. Gardner

Amherst College

A General "Power-of-d" Dispatching Framework for Heterogeneous Systems (slides)

Date: Nov 8, 2021




Abstract

Intelligent dispatching is crucial to obtaining low response times in large-scale systems. The bulk of "power-of-d" policies studied in the literature assume that the system is homogeneous, meaning that all servers have the same speed; meanwhile, real-world systems often exhibit server speed heterogeneity. We introduce a general framework for describing and analyzing heterogeneity-aware power-of-d policies. The key idea behind our framework is that dispatching policies can make use of server speed information at two decision points: when choosing which d servers to query, and when assigning a job to one of those servers. Our framework explicitly separates the dispatching policy into a querying rule and an assignment rule; we consider general families of both rule types. In this talk, we will focus on heterogeneity-aware assignment rules that ignore queue length information beyond idleness status. In this setting, we analyze mean response time and formulate novel optimization problems for the joint optimization of querying and assignment. We build upon our optimized policies to develop heuristic queue length-aware dispatching policies. We will also discuss extensions, our ongoing work, and open problems. Based on joint work with Jazeem Abdul Jaleel, Sherwin Doroudi, and Alexander Wickeham.
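The querying/assignment separation can be sketched in code (the particular rules below are illustrative only, not the optimized rules from the talk):

```python
import random

def query_uniform(n, d, rng):
    """Querying rule: sample d of the n servers uniformly at random.
    A speed-aware querying rule would bias this sample instead."""
    return rng.sample(range(n), d)

def assign(queried, queues, speeds):
    """Assignment rule using speeds and idleness only (no queue lengths
    beyond idle/busy): send the job to the fastest idle queried server,
    or, if none is idle, to the fastest queried server overall."""
    idle = [i for i in queried if queues[i] == 0]
    pool = idle if idle else list(queried)
    return max(pool, key=lambda i: speeds[i])

rng = random.Random(0)
queues = [0, 2, 0, 1]          # current queue lengths
speeds = [1.0, 3.0, 2.0, 4.0]  # heterogeneous server speeds
choice = assign(query_uniform(4, 4, rng), queues, speeds)
```

The joint optimization problems in the talk are over exactly these two degrees of freedom: which d servers to query, and which queried server gets the job.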

Bio

Kristy Gardner is an Assistant Professor in the Computer Science Department at Amherst College. Her work primarily focuses on designing and analyzing dispatching policies for large-scale systems. She received her M.S. and Ph.D. in 2015 and 2017, respectively, both from Carnegie Mellon University, and her B.A. in 2012 from Amherst College.

Sewoong Oh

University of Washington

Differential Privacy Meets Robust Statistics (slides)

Date: Nov 15, 2021




Abstract

Consider a scenario where we are training a model or performing statistical analyses on a shared dataset with entries collected from several contributing individuals. Differential privacy provides protection against membership inference attacks that try to reveal sensitive information in the dataset. Robust estimators provide protection against data poisoning attacks where malicious contributors inject corrupted data. Even though both types of attacks are powerful and easy to launch in practice, there is no algorithm providing protection against both simultaneously. In the first half of this talk, I will present the first efficient algorithm that guarantees both differential privacy and robustness to the corruption of a fraction of the data. I will focus on the canonical problem of mean estimation, which is a critical building block in many algorithms including stochastic gradient descent for training deep neural networks. In the second half of this talk, I will present a new framework that bridges differential privacy and robust statistics, which we call High-dimensional Propose-Test-Release (HPTR). This is a computationally intractable approach, but universally applicable to several statistical estimation problems including mean estimation, linear regression, covariance estimation, and principal component analysis. In most of these cases, HPTR achieves a near-optimal sample complexity by exploiting robust statistics in the algorithm, thus characterizing the minimax error rate of the corresponding private estimation problems for the first time. This talk is based on two recent papers (https://arxiv.org/abs/2104.11315 and https://arxiv.org/abs/2102.09159) and an unpublished ongoing work.
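For contrast, here is the textbook non-robust building block: an ε-differentially-private mean via clipping and the Laplace mechanism. The algorithms in the talk add robustness to corrupted entries on top of this kind of privacy guarantee, which the sketch below does not provide:

```python
import math, random

def private_mean(xs, lo, hi, eps, seed=0):
    """epsilon-DP mean: clip values to [lo, hi], so that changing one
    entry moves the mean by at most (hi - lo) / n, then add Laplace
    noise with scale sensitivity / eps."""
    rng = random.Random(seed)
    n = len(xs)
    clipped = [min(max(x, lo), hi) for x in xs]
    sensitivity = (hi - lo) / n
    b = sensitivity / eps
    u = rng.random() - 0.5          # Laplace(b) sample via inverse CDF
    noise = -b * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return sum(clipped) / n + noise

est = private_mean([0.5] * 100, lo=0.0, hi=1.0, eps=1e6)
```

Note the weakness the talk targets: a single adversarially corrupted entry still shifts this estimate by up to (hi - lo)/n, and crude clipping ranges inflate the noise, whereas robust-statistics primitives control both.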

Bio

Sewoong Oh is an Associate Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Prior to joining the University of Washington in 2019, he was an Assistant Professor in the Department of Industrial and Enterprise Systems Engineering at the University of Illinois at Urbana-Champaign, starting in 2012. He received his PhD from the Department of Electrical Engineering at Stanford University in 2011, under the supervision of Andrea Montanari. Following his PhD, he worked as a postdoctoral researcher at the Laboratory for Information and Decision Systems (LIDS) at MIT, under the supervision of Devavrat Shah.

Sewoong's research interests are in machine learning, in topics including the theory of deep learning, robust estimators, meta-learning, generative adversarial networks, and differential privacy. He was co-awarded the ACM SIGMETRICS Best Paper Award in 2015, the NSF CAREER Award in 2016, the ACM SIGMETRICS Rising Star Award in 2017, and Google Faculty Research Awards in 2017 and 2020.

Debankur Mukherjee

Georgia Institute of Technology

Coupling approach to mean-field analysis of structurally constrained large-scale systems (slides)

Date: Nov 22, 2021




Abstract

In this talk, we will discuss some recent progress on the mean-field analysis of structurally constrained large-scale stochastic systems in the context of load balancing in parallel-server systems. The state-of-the-art load balancing heuristics are predominantly based on full-flexibility models in which any task can be assigned to any server. The classical mean-field approximation has been proven to be highly accurate in this case. However, such systems are infeasible in practice because of their overwhelming implementation complexity and, due to data locality, the prohibitive storage capacity required at the servers. Thus, one typically needs to design “sparse” systems where each server can process only a small subset of the possible task types, and the load balancing algorithm is restricted to assigning an incoming task to one of the servers “compatible” with its type.

Empirically, the performance of popular load balancing algorithms can degrade arbitrarily on systems with limited flexibility. A rigorous analysis goes beyond the scope of state-of-the-art mean-field techniques, mainly because the individual queue length processes become non-exchangeable and the system lacks a Markovian characterization as an aggregate of a large number of individual processes. In this talk, we will describe how coupling techniques can be exploited to identify a broad class of sparse systems, characterized in terms of a collection of sparse compatibility graphs, that asymptotically enjoy the performance benefits of a fully flexible system in the large-system limit and for which the mean-field approximation remains valid. Such a system design thus helps to drastically reduce the storage capacity requirement and system complexity without sacrificing delay performance. Both process-level and steady-state asymptotics will be discussed.
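Dispatching under a sparse compatibility graph can be sketched as follows (an illustrative rule, not the specific algorithms analyzed in the talk):

```python
def jsq_compatible(task_type, compat, queues):
    """Join-the-shortest-queue restricted by a sparse compatibility
    graph: compat maps each task type to the (small) list of servers
    able to process it, and the task joins the shortest queue among
    those servers only."""
    j = min(compat[task_type], key=lambda i: queues[i])
    queues[j] += 1
    return j

queues = [3, 0, 5, 1]
compat = {"a": [0, 1], "b": [2, 3]}
```

The results in the talk identify which such compatibility graphs are sparse enough to be practical yet rich enough that this constrained system asymptotically performs like the fully flexible one.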

Bio

Debankur Mukherjee is an Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Institute of Technology. He received a bachelor's in Statistics from the University of Calcutta (2012), a master's in Statistics from the Indian Statistical Institute (2014), and a Ph.D. in Stochastic Operations Research with cum laude distinction from the Eindhoven University of Technology in the Netherlands (2018). Before joining Georgia Tech in 2019, he was a Prager assistant professor in the Division of Applied Mathematics at Brown University. Debankur’s research spans the area of applied probability, at the interface of stochastic processes and computer science, with applications to performance analysis, online algorithms, and machine learning. His primary focus is to develop a foundational understanding of the challenges that arise in large-scale systems, such as data centers and cloud networks. His research has been funded by the NSF. He received the Best Student Paper Award at ACM SIGMETRICS 2018.

Andrew M. Stuart

California Institute of Technology

Optimization And Sampling Without Derivatives (slides)

Date: Nov 29, 2021




Abstract

Many inverse problems arising in applications may be cast as optimization or sampling problems in which the parameter-to-data map is provided as a black-box, derivatives may not be readily available and the evaluation of the map itself may be subject to noise. I will describe the derivation of mean-field (stochastic) dynamical systems which address such problems and show how particle approximations lead to derivative-free algorithms. I will overview some of the analysis of the resulting methods, link the work to parallel developments in consensus-based optimization, and describe open problems. The work will be illustrated throughout by examples from the physical sciences.
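A toy one-dimensional particle scheme in the spirit of consensus-based optimization illustrates the derivative-free idea (all coefficients below are hypothetical; the mean-field systems in the talk are far more general):

```python
import math, random

def cbo_step(xs, f, rng, lam=1.0, sigma=0.5, beta=10.0, dt=0.1):
    """One step of a toy 1-D consensus-based optimization scheme:
    particles drift toward a Gibbs-weighted consensus point m (weights
    e^{-beta f(x)}, so low-f particles count more), with noise that
    shrinks as consensus is approached. Only black-box evaluations of
    f are used: no derivatives."""
    ws = [math.exp(-beta * f(x)) for x in xs]
    m = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
    return [x - lam * (x - m) * dt
            + sigma * abs(x - m) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            for x in xs]

# minimize f(x) = (x - 2)^2 using only black-box evaluations of f
f = lambda x: (x - 2.0) ** 2
rng = random.Random(0)
xs = [float(i) for i in range(-5, 6)]
for _ in range(300):
    xs = cbo_step(xs, f, rng)
mean_x = sum(xs) / len(xs)
```

The particle ensemble here is a finite approximation of a mean-field dynamics of the kind described in the abstract; sampling variants replace the shrinking noise so that the ensemble targets a distribution rather than a point.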

Bio

Andrew Stuart has research interests in applied and computational mathematics, and is interested in particular in the question of how to optimally combine complex mechanistic models with data. He joined Caltech in 2016 as Bren Professor of Computing and Mathematical Sciences, after 17 years as Professor of Mathematics at the University of Warwick (1999--2016). Prior to that he was on the faculty in The Departments of Computer Science and Mechanical Engineering at Stanford University (1992--1999), and in the Mathematics Department at Bath University (1989--1992). He obtained his PhD from the Oxford University Computing Laboratory in 1986, and held postdoctoral positions in Mathematics at Oxford University and at MIT in the period 1986--1989.