Titles, Abstracts, and Slides

Please click on the title for details

From HeartSteps to HeartBeats: Personalized Decision-making 

Speaker: Raaz Dwivedi, Cornell Tech

Abstract: 

Ever-increasing access to data and computational power allows us to make decisions that are personalized to users by taking their behaviors and contexts into account. These developments are especially useful in domains like mobile health and medicine. For effective personalized decision-making, we need to revisit two fundamental tasks: (1) estimation and inference from data when there is no model for a decision’s effect on a user, and (2) simulation when there is a known model for a decision’s effect on a user. Here we must overcome the difficulties facing classical approaches, namely statistical biases due to adaptively collected data and computational bottlenecks caused by high-dimensional models. This talk addresses both tasks. First, I provide a nearest-neighbor approach for unit-level statistical inference in sequential experiments. I also introduce a doubly robust variant of nearest neighbors that provides sharp error guarantees and helps measure a mobile app's effectiveness in promoting a healthier lifestyle with limited data. For the second task, I introduce kernel thinning, a practical strategy that provides near-optimal distribution compression in near-linear time. This method yields significant computational savings when simulating models of cardiac function.
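As a rough illustration of the first task, here is a toy nearest-neighbor estimate of a unit-level treatment effect from randomized sequential-experiment data. The data-generating model, the choice of k, and all variable names are illustrative assumptions; this is not the doubly robust method from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sequential-experiment data: n users, T rounds, d-dim contexts.
n, T, d = 50, 40, 3
contexts = rng.normal(size=(n, T, d))
actions = rng.integers(0, 2, size=(n, T))        # randomized binary treatments
theta = rng.normal(size=d)
outcomes = contexts @ theta + 0.5 * actions + 0.1 * rng.normal(size=(n, T))

def nn_estimate(user, t, action, k=5):
    """Average the outcomes of the k nearest contexts that received `action`."""
    dists = np.linalg.norm(contexts - contexts[user, t], axis=-1)
    dists[actions != action] = np.inf            # keep only matching actions
    dists[user, t] = np.inf                      # exclude the target point
    idx = np.unravel_index(np.argsort(dists, axis=None)[:k], dists.shape)
    return outcomes[idx].mean()

# Unit-level treatment-effect estimate for user 0 at round 0.
effect = nn_estimate(0, 0, 1) - nn_estimate(0, 0, 0)
```

The point of the sketch is only the mechanic: counterfactual means at a single (user, round) pair are imputed from nearest neighbors among the observations that received the action of interest.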

About the speaker:

Raaz Dwivedi is an Assistant Professor at Cornell Tech and Cornell Operations Research and Information Engineering (ORIE). Prior to joining Cornell, he was a FODSI postdoctoral fellow at Harvard and MIT, advised by Prof. Susan Murphy and Prof. Devavrat Shah, respectively. He earned his Ph.D. in EECS at UC Berkeley, advised by Prof. Martin Wainwright and Prof. Bin Yu, and his bachelor's degree in EE at IIT Bombay, advised by Prof. Vivek Borkar. His research builds statistically and computationally efficient strategies for personalized decision-making, with theory and methods spanning causal inference, reinforcement learning, random sampling, and high-dimensional statistics. He won the President of India Gold Medal at IIT Bombay, the Berkeley Fellowship, teaching awards at UC Berkeley and Harvard, and a best student paper award for his work on optimal compression.


RD.pdf

The moment-SOS hierarchy and the Christoffel function

Speaker: Jean-Bernard Lasserre, LAAS-CNRS 

Abstract: 

I will briefly describe the Moment-SOS hierarchy, a methodology initially devoted to solving polynomial optimization problems. However, it also applies to the Generalized Moment Problem (GMP), whose list of important applications across science and engineering is almost endless (optimization being its simplest instance). When applied to optimal control problems (as well as some nonlinear PDEs), the output of the semidefinite relaxation at step n of the Moment-SOS hierarchy approximates the vector of moments (up to degree 2n) of the measure supported on the graph of the optimal state and control trajectories. From this knowledge, it still remains to recover the optimal trajectories themselves.

The second part of the talk is devoted to the Christoffel function (CF), a tool from the theory of approximation and orthogonal polynomials. We will describe how a non-standard application of the CF allows one to recover a function from the sole knowledge of the moments of the measure supported on its graph; this is precisely what we need to recover state and control trajectories of OCPs from optimal (moment) solutions of the Moment-SOS hierarchy! We will also describe connections of the CF with seemingly unrelated domains (e.g., equilibrium measures of compact sets, certificates of positivity, optimization).

About the speaker:

Jean-Bernard Lasserre graduated from the École Nationale Supérieure d'Informatique et Mathématiques Appliquées (ENSIMAG) in Grenoble, then obtained his PhD (1978) and "Doctorat d'État" (1984) degrees, both from Paul Sabatier University in Toulouse, France. He has been at LAAS-CNRS in Toulouse since 1980, where he is currently Directeur de Recherche émérite. He is also a member of IMT, the Institute of Mathematics of Toulouse, and holds the "Polynomial Optimization" chair at the ANITI Institute (one of the four recently created Artificial Intelligence institutes in France). His past and present research activities include machine learning, applied mathematics, control and nonlinear PDEs, probability, Markov control processes, approximation theory and convex optimization, and production planning and scheduling. He has been a visiting faculty member at UC Berkeley, MIT, UCLA, Leiden University, the Fields Institute, and Stanford. He has received various awards, including the Lagrange Prize in Continuous Optimization (SIAM), the John von Neumann Theory Prize and the Khachiyan Prize (INFORMS), and, recently, the Grand Prix INRIA-Académie des Sciences.

JBL.pdf

Playing in the Dark: No-regret Learning with Adversarial Constraints 

Speaker: Abhishek Sinha, TIFR 

Abstract: 

We study a generalization of the classic Online Convex Optimization (OCO) framework by considering additional long-term adversarial constraints. Specifically, after an online policy decides its action in a round, in addition to a convex cost function, the adversary also reveals a set of k convex constraints. The cost and constraint functions may change arbitrarily over time, and no information about the future functions is assumed to be available. In this talk, we will propose a meta-policy that simultaneously achieves sublinear cumulative constraint violation and sublinear regret. This is achieved via a black-box reduction of the constrained problem to the standard OCO problem for a recursively constructed sequence of surrogate cost functions. We show that optimal performance bounds can be achieved by solving the surrogate problem using any adaptive OCO policy enjoying a standard data-dependent regret bound. A new Lyapunov-based proof technique is presented that reveals a connection between regret and certain sequential inequalities through a novel decomposition result. We will conclude the talk by highlighting applications to online multi-task learning and network control problems.
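To give a flavor of how surrogate costs can fold constraints into standard OCO machinery, here is a toy drift-plus-penalty-style sketch. It is not the meta-policy of the talk; the losses, constraints, step size, and virtual-queue update below are all illustrative assumptions.

```python
import numpy as np

# Toy drift-plus-penalty sketch for OCO with adversarial constraints.
rng = np.random.default_rng(1)
d, T, eta = 2, 200, 0.05
x = np.zeros(d)                 # current action in the box [-1, 1]^d
Q = 0.0                         # virtual queue tracking constraint violation

total_cost = total_violation = 0.0
for t in range(T):
    a = rng.normal(size=d)      # revealed cost:       f_t(x) = a @ x
    b = rng.normal(size=d)      # revealed constraint: g_t(x) = b @ x - 1 <= 0
    viol = max(b @ x - 1.0, 0.0)
    total_cost += a @ x
    total_violation += viol
    # Gradient step on the surrogate cost f_t + Q * max(g_t, 0).
    grad = a + (Q * b if viol > 0 else 0.0)
    x = np.clip(x - eta * grad, -1.0, 1.0)   # projection onto the box
    Q = max(Q + viol, 0.0)                   # queue accumulates violation
```

The queue Q plays the role of a Lyapunov-style multiplier: the larger the accumulated violation, the more the surrogate gradient tilts the action toward feasibility.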


Joint work with Rahul Vaze (TIFR). (Preprint: https://arxiv.org/pdf/2310.18955.pdf)

About the speaker:

Abhishek Sinha is currently a faculty member in the School of Technology and Computer Science at the Tata Institute of Fundamental Research, Mumbai, India. Prior to joining TIFR, he was on the faculty of the Department of Electrical Engineering at the Indian Institute of Technology Madras. He received his Ph.D. from the Massachusetts Institute of Technology, USA, where he was affiliated with the Laboratory for Information and Decision Systems. Thereafter, Abhishek worked as a senior engineer at Qualcomm Research, San Diego. He obtained his M.E. degree in Telecommunication Engineering from the Indian Institute of Science, Bangalore, and his B.E. degree in Electronics and Telecommunication Engineering from Jadavpur University, Kolkata, India. He is a recipient of the INSA Medal for Young Scientists (2021), Best Paper Awards at INFOCOM 2018 and MobiHoc 2016, and the Jagadis Bose National Science Talent Search (JBNSTS) scholarship (2006), Kolkata, India. His areas of interest include theoretical machine learning, networks, and information theory.

AS.pdf

Multivariate nonparametric regression using mixed partial derivatives

Speaker: Aditya Guntuboyina, UCB

Abstract: 

I will describe some recent methods for multivariate nonparametric estimation based on constraining mixed partial derivatives. The resulting estimators are efficiently computable and work well in practice. They are also theoretically attractive, achieving rates of convergence that avoid the usual curse of dimensionality. Specific examples of this methodology include Hardy-Krause variation constrained regression, MARS via LASSO, entirely monotonic regression, and totally convex regression.
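As a rough caricature of one item on this list, "MARS via LASSO" can be sketched as expanding the inputs into hinge features and fitting them with an L1 penalty. The knot grid, penalty level, and the simple ISTA solver below are all illustrative choices, not the estimator from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two inputs, a hinge-plus-quadratic signal, small noise.
n, knots = 200, np.linspace(-1, 1, 9)
X = rng.uniform(-1, 1, size=(n, 2))
y = np.maximum(X[:, 0], 0) + X[:, 1] ** 2 + 0.05 * rng.normal(size=n)

# MARS-style hinge features max(x_j - knot, 0), plus an intercept column.
F = np.column_stack([np.ones(n)] +
                    [np.maximum(X[:, j, None] - knots, 0) for j in range(2)])

def ista(F, y, lam=0.01, iters=500):
    """Proximal gradient (ISTA) for min_b ||y - F b||^2 / (2n) + lam * ||b||_1."""
    step = len(y) / np.linalg.norm(F, 2) ** 2    # 1 / Lipschitz constant
    b = np.zeros(F.shape[1])
    for _ in range(iters):
        z = b - step * F.T @ (F @ b - y) / len(y)
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return b

beta = ista(F, y)
pred = F @ beta
```

The L1 penalty selects a sparse subset of hinge basis functions, which is the sense in which an adaptive-spline-type fit can be recast as a lasso problem.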

About the speaker:

Aditya Guntuboyina received the B.Stat. and M.Stat. degrees from the Indian Statistical Institute in 2004 and 2006, respectively, and the Ph.D. degree in Statistics from Yale University in 2011. He was a postdoctoral researcher at the University of Pennsylvania from July to December 2011. He joined the Department of Statistics at the University of California, Berkeley in January 2012 as an Assistant Professor, and is currently an Associate Professor there. His research interests include nonparametric function estimation, Bayesian statistics, and empirical Bayes methodology. He received the NSF CAREER award in 2017.

AG.pdf

Adaptive linear estimating equations

Speaker: Kaulik Khamaru, Rutgers University

Abstract: 

Sequential data collection has emerged as a widely adopted technique for enhancing the efficiency of data-gathering processes. Despite its advantages, such a data collection mechanism often introduces complexities into the statistical inference procedure. For instance, the ordinary least squares (OLS) estimator in an adaptive linear regression model can exhibit non-normal asymptotic behavior, posing challenges for accurate inference and interpretation. There are two predominant ways to tackle this issue: a) non-asymptotically valid inference using martingale concentration inequalities, and b) asymptotically valid inference via asymptotic normality.

In this work, we propose a general method for constructing confidence regions that connects these two paradigms. In other words, the proposed confidence regions are valid non-asymptotically, and asymptotically they produce Gaussian-like confidence regions.
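A small simulation conveys the setting: covariates chosen adaptively (here, a hypothetical greedy two-armed design) followed by a plain OLS fit. Everything below is an illustrative toy, not the proposed confidence-region construction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Adaptive design: the arm chosen at round t depends on past rewards
# (greedy), so the covariates are not i.i.d.; then fit ordinary least squares.
T, theta = 500, np.array([1.0, -0.5])
X, y = np.zeros((T, 2)), np.zeros(T)
arm = 0
for t in range(T):
    X[t] = np.eye(2)[arm]                        # indicator covariate of the arm
    y[t] = X[t] @ theta + rng.normal()           # noisy linear reward
    # Next arm: the one with the larger running mean reward so far.
    means = [y[:t + 1][X[:t + 1, j] == 1].mean()
             if (X[:t + 1, j] == 1).any() else 0.0 for j in range(2)]
    arm = int(np.argmax(means))

theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
```

Because the greedy rule starves the worse arm of samples, the two coordinates of the OLS estimator are estimated at very different rates, which is exactly the kind of adaptivity that can break standard normal-approximation inference.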

About the speaker:

Kaulik Khamaru is an Assistant Professor of Statistics at Rutgers University. He completed his PhD at the University of California, Berkeley, under the guidance of Professor Martin J. Wainwright and Professor Michael I. Jordan. He received his undergraduate and master's degrees in Statistics from the Indian Statistical Institute, Kolkata. In the summer of 2019, he worked with Professor Dean Foster at Amazon. In 2020, he was part of the Berkeley-BAIR collaboration and worked under Professor Lester Mackey. His research interests span the theory and application of statistics, machine learning, and optimization. Specific areas include the EM algorithm, Gaussian mixture models, model mis-specification, factor analysis, reinforcement learning, inference in sequential environments, and non-convex optimization.

KK.pdf

A General Framework for Optimal Data-Driven Optimization and Learning

Speaker: Tobias Sutter, University of Konstanz

Abstract: 

In the setting of data-driven stochastic programming, the decision-maker cannot observe the distribution of its exogenous uncertainties but rather has access to a finite set of data generated by a stationary process. We advance a novel framework that allows us to study how to transform data into optimal decisions. We characterize the optimal decision as the least conservative decision that guarantees an exponential decay of its corresponding out-of-sample disappointment, where the out-of-sample disappointment quantifies the probability that the actual expected cost of the proposed decision exceeds its predicted cost. We show that under certain mild technical assumptions, closely related to the existence of a sufficient statistic satisfying a large deviation principle, the optimal decision can be constructed via a separation into an estimation phase and a subsequent robust optimization phase. The corresponding robust optimization scheme relies on an ambiguity set induced by the large deviation rate function of the estimator used, and hence by the underlying data-generating process. The presented framework is the first attempt to rate the zoo of ambiguity sets derived in the recent literature in terms of statistical power. We also discuss how to numerically solve some of the introduced distributionally robust optimization problems.
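As a minimal sketch of the estimate-then-robustify idea, one can compute a worst-case expected cost over a KL-divergence ball around the empirical distribution using the standard convex dual of that problem. The cost distribution, ball radius, and grid search below are illustrative assumptions, not the framework's construction.

```python
import numpy as np

rng = np.random.default_rng(5)

# Empirical samples of a cost under the (unknown) data-generating process.
costs = rng.normal(loc=1.0, size=1000)
r = 0.05                                   # radius of the KL ambiguity ball

# Dual of  max { E_Q[cost] : KL(Q || P_n) <= r }  over lam > 0, on a grid:
#   lam * log E_{P_n}[exp(cost / lam)] + lam * r
lams = np.linspace(0.05, 20.0, 2000)
duals = [lam * np.log(np.mean(np.exp(costs / lam))) + lam * r for lam in lams]
worst_case = min(duals)
empirical = costs.mean()                   # worst_case >= empirical, by duality
```

The gap between `worst_case` and `empirical` is the robustness margin purchased by the ambiguity radius r; here the rate function behind the ball is the KL divergence, one of the many ambiguity-set choices the talk's framework compares.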

In the second part of this talk, the presented framework will be modified with the goal of developing a systematic construction of highly accurate confidence intervals using a moderate deviation principle-based approach. We propose confidence intervals that are statistically optimal in the sense that they satisfy criteria regarding exponential accuracy, minimality, consistency, mischaracterization probability, and the eventual uniformly most accurate (UMA) property. The confidence intervals suggested by this approach are expressed as solutions to robust optimization problems, where the uncertainty is expressed via the underlying moderate deviation rate function induced by the data-generating process.

The talk is based on two preprints (https://arxiv.org/pdf/2010.06606.pdf and https://arxiv.org/pdf/2305.14496.pdf).

About the speaker:

Tobias Sutter received the B.Sc. and M.Sc. degrees in mechanical engineering from ETH Zürich, Switzerland, in 2010 and 2012, respectively, and the Ph.D. degree in electrical engineering from the Automatic Control Laboratory, ETH Zürich, in 2017. He is currently an Assistant Professor in the Computer Science Department at the University of Konstanz, Germany. Prior to joining the University of Konstanz, he held research and lecturer appointments at EPFL with the Chair of Risk Analytics and Optimization, and with the Institute of Machine Learning, ETH Zürich. His research interests revolve around control, reinforcement learning, and data-driven robust optimization. Tobias Sutter was a recipient of the 2016 George S. Axelby Outstanding Paper Award from the IEEE Control Systems Society and received the ETH Medal for his outstanding Ph.D. thesis on approximate dynamic programming in 2018.

TS.pdf

From Optimization to Control: An Algorithmic Perspective

Speaker: Peyman Mohajerin Esfahani, TU Delft

Abstract: 

In this talk, we first introduce three different research questions related to data-driven decision-making in which convexity turns out to play a significant role. We then focus on the one concerning real-time implementation and discuss recent algorithmic developments therein. In the second part, we draw an explicit analogy across four problem classes in optimization and control with a unified solution characterization. This viewpoint allows for a systematic transformation of algorithms from one domain to the other. With this in mind, we exploit two linear structural constraints specific to control problems with finite state-action pairs to approximate the Hessian in a second-order-type algorithm from optimization. This leads to novel first-order control algorithms with the same computational complexity as (model-based) value iteration and (model-free) Q-learning, while, interestingly, they exhibit an empirical convergence behavior similar to (model-based) policy iteration and (model-free) Zap Q-learning, with very low sensitivity to the discount factor.
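For reference, the model-based baseline mentioned above can be sketched as plain value iteration on a toy random MDP. The MDP, discount factor, and stopping tolerance are illustrative; this is not the first-order algorithm of the talk.

```python
import numpy as np

rng = np.random.default_rng(4)

# A random finite MDP: nS states, nA actions, random transitions and rewards.
nS, nA, gamma = 5, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state dist.
R = rng.uniform(size=(nS, nA))

V = np.zeros(nS)
for _ in range(500):
    Q = R + gamma * P @ V                       # Bellman backup, Q[s, a]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:       # stop near the fixed point
        break
    V = V_new
policy = Q.argmax(axis=1)                       # greedy policy w.r.t. V
```

Each sweep costs O(nS^2 * nA) for the backup, which is the per-iteration complexity class the novel first-order control algorithms are said to match.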


This talk is based on the following three works: arXiv:2205.00446, arXiv:2008.10362, arXiv:2311.11166.

About the speaker:

Peyman Mohajerin Esfahani is an associate professor at the Delft Center for Systems and Control. He joined TU Delft in October 2016 as an assistant professor. Prior to that, he held several research appointments at EPFL, ETH Zurich, and MIT between 2014 and 2016. He received the BSc and MSc degrees from Sharif University of Technology, Iran, and the PhD degree from ETH Zurich. He currently serves as an associate editor of Operations Research, Transactions on Automatic Control, and Open Journal of Mathematical Optimization. He was one of the three finalists for the Young Researcher Prize in Continuous Optimization awarded by the Mathematical Optimization Society in 2016, and a recipient of the 2016 George S. Axelby Outstanding Paper Award from the IEEE Control Systems Society. He received the ERC Starting Grant and the INFORMS Frederick W. Lanchester Prize in 2020. He is the recipient of the 2022 European Control Award. 

PME1.pdf
PME2.pdf