Date: April 20, 2026
Speaker:
Yale University
In the planted subgraph model, a statistician observes the union of an unknown copy of a fixed "planted" subgraph H and an independent instance of Erdős–Rényi noise G(n,p), with the goal of recovering the location of H from the observed graph. This framework generalizes several classical high-dimensional estimation models, including the planted clique model (Jerrum, 1992), obtained when H is a k-clique, and the planted matching model (Chertkov et al., 2010), when H is a perfect matching.
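As a concrete illustration, the observation model above can be simulated directly: draw G(n,p) noise and overlay the planted edge set. This is a minimal sketch; the instance sizes and the choice of H as a k-clique (the planted clique special case) are our own illustrative assumptions.

```python
import itertools
import random

def planted_subgraph_observation(n, planted_edges, p, rng):
    """Union of a planted copy of H (given by its edge list) and
    independent Erdos-Renyi G(n, p) noise, returned as an edge set."""
    noise = {e for e in itertools.combinations(range(n), 2)
             if rng.random() < p}
    return noise | {tuple(sorted(e)) for e in planted_edges}

rng = random.Random(0)
n, k, p = 100, 5, 0.05
# Planted clique special case: H is a k-clique on the first k vertices.
clique = list(itertools.combinations(range(k), 2))
obs = planted_subgraph_observation(n, clique, p, rng)
# The statistician sees only `obs` and must recover the clique's location.
```

The recovery problem is exactly that the planted edges are indistinguishable, edge by edge, from the noise edges; only the combinatorial structure of H makes estimation possible.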
In this talk, we present new results characterizing the limiting minimum mean squared error (MMSE) for all subgraphs H satisfying a mild density condition. Our unified analysis reveals a rich structure: the limiting MMSE takes a staircase form, with discontinuities determined (up to 1+o(1) error) by natural variants of the subgraph expectation threshold of H, introduced by Kahn and Kalai (2004). Notably, our work establishes for the first time a connection between this combinatorial threshold, long central to probabilistic combinatorics, and the fundamental limits of statistical estimation.
Ilias Zadik is Assistant Professor of Statistics and Data Science at Yale University. His research mainly focuses on the mathematical theory of statistics and its many connections with other fields such as computer science, probability theory, and statistical physics. Prior to Yale, he held postdoctoral positions at MIT and NYU. He received his PhD from MIT in 2019.
Date: April 13, 2026
Speaker:
University of Amsterdam
A price-setting seller offers q units of inventory over a finite selling horizon of length T. Prices are set either according to the optimal static policy or the optimal dynamic policy (à la Gallego and van Ryzin, 1994). As T grows large, what is the distribution of the time it takes to sell k items? Is static pricing systematically slower or faster than dynamic pricing? And how does this all depend on characteristics of the willingness-to-pay distribution? In this joint work with Tarek Abdallah of Northwestern University, we address these questions.
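The static-pricing side of this question can be explored with a small Monte Carlo sketch: under a fixed price, sales occur at a thinned Poisson rate, so the time to sell k items is a sum of k exponentials. The arrival rate, willingness-to-pay distribution, and price below are illustrative assumptions, not values from the talk.

```python
import random

def time_to_sell_k(price, k, arrival_rate, wtp_sampler, rng):
    """Static pricing: customers arrive as a Poisson process with the given
    rate and buy one unit iff their willingness to pay meets the price.
    Returns the (simulated) time at which the k-th unit is sold."""
    t, sold = 0.0, 0
    while sold < k:
        t += rng.expovariate(arrival_rate)  # inter-arrival time
        if wtp_sampler(rng) >= price:       # purchase decision
            sold += 1
    return t

rng = random.Random(1)
# Hypothetical instance: rate-2 arrivals, WTP ~ Uniform(0, 1), price 0.5,
# so sales occur at thinned rate 2 * P(WTP >= 0.5) = 1 per unit time.
samples = [time_to_sell_k(0.5, 10, 2.0, lambda r: r.random(), rng)
           for _ in range(2000)]
mean_time = sum(samples) / len(samples)  # close to k = 10 in this instance
```

Comparing this distribution against the one induced by a dynamic (inventory- and time-dependent) price path is precisely the question the talk addresses.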
Arnoud den Boer is associate professor at the Korteweg-de Vries Institute for Mathematics at the University of Amsterdam. His research focuses on various aspects of pricing and assortment optimization problems, such as demand learning, waste reduction, and collusion. Website: https://sites.google.com/view/arnoud-v-den-boer/home
Date: March 30, 2026
Speaker:
Stanford University
In many randomized experiments, the units being studied influence one another. A medication for a contagious disease given to one person may protect those they interact with. When such interactions are present, standard estimators are biased. This problem has received significant recent attention, though most approaches require knowledge of the interaction network. We study the setting where this network is unobserved. Drawing on ideas from statistical physics, we show that treatment effects propagate through networks following stable distributional dynamics. The key requirement is observing outcomes over time as treatments vary; this temporal dimension allows us to detect distributional shifts and estimate counterfactual trajectories without reconstructing the network. We validate the framework in three real experiments (a public health trial and two online experiments) and a simulated social network of interacting AI agents. In each case, the method produces estimates consistent with network-informed approaches. We also discuss limitations and settings where the approach falls short.
Mohsen Bayati is the Carl and Marilynn Thoma Professor of Operations, Information and Technology at the Stanford Graduate School of Business. His research focuses on data-driven decision-making, experiment design, and the safe deployment of AI, including balancing automation with safety in high-stakes domains such as healthcare. He utilizes tools from multi-armed bandits, message-passing algorithms, and high-dimensional statistics. Mohsen received a BS in Mathematics from Sharif University of Technology and a PhD in Electrical Engineering from Stanford University. He then worked as a postdoctoral researcher at Microsoft Research and Stanford University. His work has been recognized with the INFORMS Healthcare Applications Society's Best Paper (Pierskalla) Award in 2014 and 2016, the INFORMS Applied Probability Society's Best Paper Award in 2015, the National Science Foundation CAREER Award, and the PhD Faculty Distinguished Service Award at the Stanford Graduate School of Business.
Date: March 9, 2026
Speaker:
ETH Zurich
In many game-theoretic settings, agents must make decisions in the face of the uncertain behavior of others. Often, this uncertainty arises from multiple sources, e.g., incomplete information, limited computation, and bounded rationality. While it may be possible to guide the agents' decisions by modeling each source, their joint presence makes this task particularly daunting. It is therefore natural for agents to seek protection against deviations around the emergent behavior itself, which is ultimately affected by all the above sources of uncertainty. To do so, we propose that each agent make decisions against the worst-case behavior contained in an ambiguity set of tunable size, centered at the emergent behavior so implicitly defined. This gives rise to a novel equilibrium notion, which we call the strategically robust equilibrium. Building on this definition, we show that, when judiciously operationalized via optimal transport, strategically robust equilibria (i) are guaranteed to exist under the same assumptions required for Nash equilibria; (ii) interpolate between Nash equilibria and security strategies; and (iii) come at no additional computational cost compared to Nash equilibria. Through a variety of experiments, including bi-matrix games, congestion games, and Cournot competition, we show that strategic robustness protects against uncertainty in opponents' behavior and, surprisingly, often results in higher equilibrium payoffs, an effect we refer to as coordination via robustification.
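A toy version of the robust best response can be sketched as follows. The talk defines ambiguity sets via optimal transport; for simplicity this sketch substitutes an L1 ball of radius eps around the opponent's nominal mixed strategy, and the 2x2 game matrix is hypothetical.

```python
def worst_case_payoff(a_row, y, eps):
    """Minimum of sum(a_row[j] * y_adv[j]) over distributions y_adv with
    ||y_adv - y||_1 <= eps: the adversary shifts up to eps/2 probability
    mass from the columns paying the row player most to the column paying
    least (each shifted unit contributes 2 to the L1 distance)."""
    order = sorted(range(len(a_row)), key=lambda j: a_row[j])
    y_adv = list(y)
    target, budget = order[0], eps / 2.0
    for j in reversed(order):        # drain most expensive columns first
        if j == target or budget <= 0:
            continue
        move = min(y_adv[j], budget)
        y_adv[j] -= move
        y_adv[target] += move
        budget -= move
    return sum(a * b for a, b in zip(a_row, y_adv))

def robust_best_response(A, y, eps):
    """Row action maximizing the worst-case payoff over the ambiguity set."""
    vals = [worst_case_payoff(row, y, eps) for row in A]
    best = max(range(len(vals)), key=lambda i: vals[i])
    return best, vals[best]

# Hypothetical 2x2 coordination game against a uniform nominal opponent.
A = [[1.0, 0.0], [0.0, 1.0]]
y = [0.5, 0.5]
action, value = robust_best_response(A, y, 0.4)
```

At eps = 0 this reduces to a plain best response against y; as eps grows, the guaranteed value moves toward the security (maximin) level, illustrating the interpolation property claimed in the abstract.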
Florian Dörfler is a Professor at the Automatic Control Laboratory at ETH Zürich. He received his Ph.D. degree in Mechanical Engineering from the University of California at Santa Barbara in 2013, and a Diplom degree in Engineering Cybernetics from the University of Stuttgart in 2008. From 2013 to 2014 he was an Assistant Professor at the University of California Los Angeles. He served as the Associate Head of the ETH Zürich Department of Information Technology and Electrical Engineering from 2021 to 2022. His research interests are centered around automatic control, system theory, optimization, and learning. His particular foci are on network systems, data-driven settings, and applications to power systems. He is a recipient of the 2025 Rössler Prize, the highest scientific award at ETH Zürich across all disciplines, as well as the distinguished career awards by IFAC (Manfred Thoma Medal 2020) and EUCA (European Control Award 2020). He and his team received best paper distinctions in the top venues of control, machine learning, power systems, power electronics, and circuits and systems. They were recipients of the 2011 O. Hugo Schuck Best Paper Award, the 2012-2014 Automatica Best Paper Award, the 2016 IEEE Circuits and Systems Guillemin-Cauer Best Paper Award, the 2022 IEEE Transactions on Power Electronics Prize Paper Award, the 2024 Control Systems Magazine Outstanding Paper Award, and multiple Best PhD thesis awards at UC Santa Barbara and ETH Zürich.
They were further winners or finalists for Best Student Paper awards at the European Control Conference (2013, 2019), the American Control Conference (2010, 2016, 2024), the Conference on Decision and Control (2020), the PES General Meeting (2020), the PES PowerTech Conference (2017, 2025), the International Conference on Intelligent Transportation Systems (2021), the IEEE CSS Swiss Chapter Young Author Best Journal Paper Award (2022, 2024, 2025), the IFAC Conferences on Nonlinear Model Predictive Control (2024) and Cyber-Physical-Human Systems (2024), and an Oral at NeurIPS (2024). He is a Fellow of the IEEE and currently serves on the council of the European Control Association and as a senior editor of Automatica.
Date: March 2, 2026
Speaker:
University of Illinois at Urbana-Champaign
In this talk, I will revisit the influential work of Mitter and Newton on an information-theoretic interpretation of Bayes’ formula through the Gibbs variational principle. This formulation allowed them to pose nonlinear estimation for diffusion processes as a problem in stochastic optimal control, so that the posterior density of the signal given the observation path could be sampled by adding a drift to the signal process. I will show that this control-theoretic approach to sampling provides a common mechanism underlying several distinct problems involving diffusion processes, specifically importance sampling using Feynman–Kac averages, time reversal, and Schrödinger bridges.
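The Gibbs variational principle underlying this interpretation can be stated as follows (standard Donsker–Varadhan form; the notation here is ours, not necessarily the speaker's):

```latex
% Gibbs variational principle: for a reference measure P and bounded f,
-\log \int e^{-f}\,\mathrm{d}P
  \;=\; \min_{Q \ll P}\Big\{\, \mathbb{E}_{Q}[f] + D(Q \,\|\, P) \,\Big\},
\qquad
\mathrm{d}Q^{*} \;=\; \frac{e^{-f}\,\mathrm{d}P}{\int e^{-f}\,\mathrm{d}P}.
```

Taking P to be the prior law of the signal and e^{-f} the likelihood of the observed path, the unique minimizer Q* is exactly the posterior of Bayes' formula; the Mitter–Newton viewpoint is that, for diffusion processes, this minimization is a stochastic optimal control problem whose optimal drift steers the signal process to sample from Q*.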
Maxim Raginsky received the B.S. and M.S. degrees in 2000 and the Ph.D. degree in 2002 from Northwestern University, all in Electrical Engineering. He has held research positions with Northwestern, the University of Illinois at Urbana-Champaign (where he was a Beckman Foundation Postdoctoral Fellow from 2004 to 2007), and Duke University. In 2012, he returned to UIUC, where he is currently a Professor and William L. Everett Fellow with the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory. He also holds a courtesy appointment with the Department of Computer Science. Prof. Raginsky's interests cover probability and stochastic processes, deterministic and stochastic control, machine learning, optimization, and information theory. Much of his recent research is motivated by fundamental questions in modeling, learning, and simulation of nonlinear dynamical systems, with applications to advanced electronics, autonomy, and artificial intelligence.
Date: February 9, 2026
Speaker:
University of Twente
Traditional social network analysis often models homophily, the tendency of similar individuals to form connections, using a single parameter. We will show that in many important applications, such as hypergraphs or temporal contact networks, homophily occurs at several different scales. We present a random graph model that integrates these different homophily values through a maximum entropy approach. We demonstrate that the interaction between different levels of homophily results in complex percolation thresholds. Furthermore, we show that our model fits a wide range of data sets remarkably well, capturing their homophily patterns accurately.
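As a toy illustration of how a homophily parameter interacts with percolation, one can simulate a two-community random graph (a crude single-scale stand-in for the talk's maximum-entropy model; all parameters below are invented) and track the largest connected component across the connectivity transition.

```python
import random

def sbm_giant_fraction(n, p_in, p_out, rng):
    """Two-community random graph on n vertices: an edge appears with
    probability p_in inside a community and p_out across communities
    (p_in > p_out encodes homophily). Returns the fraction of vertices
    in the largest connected component."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            same = (u < n // 2) == (v < n // 2)
            if rng.random() < (p_in if same else p_out):
                adj[u].append(v)
                adj[v].append(u)
    seen, best = [False] * n, 0
    for s in range(n):               # DFS over components
        if seen[s]:
            continue
        seen[s] = True
        stack, size = [s], 0
        while stack:
            x = stack.pop()
            size += 1
            for y in adj[x]:
                if not seen[y]:
                    seen[y] = True
                    stack.append(y)
        best = max(best, size)
    return best / n

rng = random.Random(0)
n = 400
# Mean degree ~4 (above the percolation threshold) vs ~0.4 (below it).
supercritical = sbm_giant_fraction(n, 6 / n, 2 / n, rng)
subcritical = sbm_giant_fraction(n, 0.6 / n, 0.2 / n, rng)
```

In the talk's setting, homophily enters at several scales simultaneously, which is what produces the more complex percolation thresholds described in the abstract.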
Clara Stegehuis is an associate professor at the University of Twente. She works at the intersection of probability theory, graph theory and stochastic networks, with an emphasis on asymptotic analysis, stochastic process limits, and randomized algorithms. Problems she investigates are often inspired by applications in network science, physics and computer science.