The Purdue Quantitative Methods (QM) Seminar is a weekly seminar organized by the Quantitative Methods Department at Purdue University's Daniels School of Business. The seminar invites internal and external speakers to present their research on topics at the intersection of operations research, optimization, statistics, and their applications.
Purdue students and faculty may subscribe to our mailing list: qm-seminar@lists.purdue.edu (subscribe here).
If you are interested in giving a talk in our seminar, please contact Billy Jin or Alex L. Wang.
Schedules for previous semesters: Fall 2025
For Spring 2026, all seminars are Fridays, 1:30-3pm in RAWLS 2058, unless otherwise specified.
Jan 16, 1:30-3pm, RAWLS 2058
Mohit Tawarmalani (Purdue QM)
Finite Hierarchies for Disjoint Bilinear Programs
Abstract: Disjoint bilinear programs are a class of mathematical optimization problems that minimize a bilinear function over the Cartesian product of polytopes. These problems include Boolean programming problems, constrained bimatrix games, two-stage linear programming, and piecewise linear concave minimization. This paper introduces novel relaxation hierarchies for such problems. At the core of these hierarchies is an algorithm that computes the barycentric coordinates of a polyhedral cone as rational, non-negative functions, solving a long-standing problem in computational geometry. By combining the geometric structure of barycentric coordinates with algebraic techniques via rational functions, the hierarchies achieve the convex hull in m iterations, where m represents the number of inequalities that a subset of variables must satisfy. This framework provides the first unified approach to analyze and tighten relaxations from both disjunctive programming and the Sherali-Adams hierarchy, while ensuring finite termination. The techniques extend to concave-convex programs and facial disjunctive programs. Leveraging these methods enables the derivation of new algebraic optimality certificates and the development of a finite simplicial branch-and-bound algorithm for solving disjoint bilinear programs, demonstrating strong computational performance.
Bio: Mohit Tawarmalani is the Executive Associate Dean and Allison and Nancy Schleicher Professor at the Mitch Daniels School of Business, Purdue University. He serves as the academic director of the Krenicki Center for Business Analytics and Machine Learning. Mohit Tawarmalani's research interests are in optimization algorithm design, building energy-efficient chemical processes, designing resilient networks, and understanding the economics of business decisions. Mohit has co-authored a book on global optimization and the widely used software package BARON. For his research, Mohit was awarded the INFORMS Computing Society Prize in 2004, the Beale-Orchard-Hays Prize from the Mathematical Programming Society in 2006, and the Computing in Chemical Engineering award from AIChE in 2024.
Mohit was one of the founding architects of Purdue's Master's and Bachelor's programs in Business Analytics and Information Management and led the team that won the 2023 INFORMS UPS George D. Smith Prize for innovative educational practices in training students to be practitioners of operations research and analytics. He also led the restructuring of the Mitch Daniels School of Business in its transition from 2 to 9 departments. Mohit serves as an Associate Editor for the Journal of Global Optimization and SIAM Journal on Optimization and as an area editor for Mathematical Programming Computation.
Jan 23, 1:30-3pm, RAWLS 2058
Thanh Nguyen (Purdue QM)
A Few Good Choices
Abstract: Collective decisions often stall not because people fundamentally disagree, but because they disagree too finely. In many settings—public budgeting, policy design, or hiring—preferences are broadly aligned, yet no single option commands majority support. Every proposed winner is defeated by some other alternative, making a unique choice impossible. Our main result shows that this deadlock is self-imposed. If the decision is allowed to consist of a small set—no more than five options—majority support can always be restored (53.5%, to be exact). While choosing one outcome may be impossible, choosing a few is not. This insight mirrors common practice in data-driven and expert-guided decision-making. AI systems present short recommendation lists rather than single predictions. In nonparametric statistics, confidence sets replace fragile point estimates. Expert committees rely on shortlists instead of unanimous rankings. Our result provides a formal foundation for these practices: selecting at most five good options can deliver robustness and consensus precisely when selecting one cannot.
Joint work with Haoyu Song and Young-San Lin
Bio: Thanh Nguyen is a professor of quantitative methods whose research centers on market design and decision sciences.
Jan 30, 1:30-3pm, RAWLS 2058
Tongseok Lim (Purdue QM)
Monotone Curve Estimation and Beyond
Abstract: This talk introduces a new framework for estimating monotone principal curves using optimal transport theory, providing a smooth and continuous one-dimensional representation of multivariate data. I establish statistical guarantees for the resulting monotone curve estimator, including bounds on the expected empirical and generalized mean squared errors. Through simulation studies, I show that the proposed method improves accuracy over existing approaches when the data exhibit monotone structure. I will conclude with a brief overview of ongoing work on a new approach to high-dimensional density estimation and data fusion, based on entropic optimal transport duality and the theory of Schrödinger bridges.
Bio: Tongseok Lim is an Assistant Professor of Quantitative Methods at Purdue University’s Mitchell E. Daniels, Jr. School of Business. His research lies at the intersection of optimal transport (including martingale optimal transport) and its applications to economics, finance, and statistics, as well as variational analysis arising in geometry, physics, and data science, and Hodge-theoretic methods on graphs with connections to stochastic calculus and game theory. He has published in outlets including Annals of Probability, Bernoulli, Mathematical Programming, and SIAM journals. Before joining Purdue, he held positions at ShanghaiTech University, the Fields Institute (Fields Research Fellow), the University of Oxford, and TU Wien. He earned his PhD in Mathematics from the University of British Columbia.
Feb 6, 1:30-3pm, RAWLS 2058
William B. Haskell (Purdue SCOM)
Recursive Preferences on Information States
Abstract: We investigate history-dependent preferences over finite horizon temporal payoff lotteries. We explore how to incorporate history-dependence in a tractable way to allow practical computation. First, we axiomatize the class of history-dependent recursive preferences and its representation as the composition of time and risk aggregators. In our model, the decision-maker's (DM's) risk attitude is determined by the class of Chew-Dekel (betweenness) preferences, which is a substantial generalization of vNM risk preferences. Then, we define preferences on a parsimonious 'information state' (such as wealth, satiation, or mood) which succinctly captures the history dependence. We show how the information state is endogenously determined by the original preferences, and we derive the simpler recursive representation of preferences on the information state.
Furthermore, we can extend this method to include ambiguity preferences where there are exogenous shocks. In this case, the information state has two aspects: one captures the evolving beliefs, and the other captures the changing tastes. We obtain the representation and axiomatization of these tractable preferences. We also find additional properties that strictly separate beliefs and tastes in this model (while in general they are entangled).
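The "composition of time and risk aggregators" has a simple computational shape. The sketch below is illustrative only: it uses a CARA certainty equivalent as the risk aggregator and discounted addition as the time aggregator, not the Chew-Dekel betweenness class from the talk, and all function names and numbers are invented for the example.

```python
import math

def certainty_equivalent(lottery, alpha=1.0):
    """Risk aggregator: CE of a finite lottery [(prob, value), ...]
    under the CARA utility u(x) = -exp(-alpha * x)."""
    eu = sum(p * -math.exp(-alpha * v) for p, v in lottery)
    return -math.log(-eu) / alpha

def evaluate(payoff, continuation_lottery, beta=0.9, alpha=1.0):
    """Time aggregator: one step of the recursion
    V_t = c_t + beta * risk_agg(distribution of V_{t+1})."""
    if not continuation_lottery:
        return payoff
    return payoff + beta * certainty_equivalent(continuation_lottery, alpha)

# Two-period example: receive 1 today, then 0 or 2 with equal chance tomorrow.
# Risk aversion pulls the CE of tomorrow's lottery below its mean of 1.
v = evaluate(1.0, [(0.5, 0.0), (0.5, 2.0)])
```

A history-dependent version would thread an information state (wealth, satiation, mood) through `evaluate` as an extra argument that updates at each step; the talk's result is precisely that such a state suffices.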
Bio: William B. Haskell received his B.S. in Mathematics and M.S. in Econometrics from the University of Massachusetts Amherst in 2005 and 2006, respectively. He then obtained his M.S. in Operations Research, M.A. in Mathematics, and Ph.D. in Operations Research from the University of California, Berkeley in 2007, 2010, and 2011, respectively. He is currently an Associate Professor in the Supply Chain and Operations Management Department in the Mitchell E. Daniels, Jr. School of Business at Purdue University. Dr. Haskell's research focus is on risk-aware models, algorithms for optimization and dynamic programming, and data-driven decision-making.
Feb 13, 1:30-3pm, RAWLS 2058
Andreas Neuhierl (Purdue Finance)
Benign Overfitting in Economic Forecasting via Noise Regularization
Abstract: This paper studies linear overparameterized models in economic forecasting and highlights the benefit of including predictors with no predictive power (noise variables) as a means of regularization. We consider a setting where both the outcome variable and the high-dimensional predictors are driven by a small number of latent factors and provide a theoretical justification that the linear forecast model is dense rather than sparse. We forecast the economic outcome without estimating the factors and achieve the same asymptotic accuracy as if the true latent factors were known and directly used, avoiding concerns about estimating weak factors. This is achieved by the inclusion of many noise predictors. We show that the inclusion of many noise predictors serves as regularization that shrinks the eigenvalues of the design matrix and reduces out-of-sample variance. In contrast, perfect variable selection that removes noise variables can worsen forecasts when the number of retained predictors is comparable to the sample size. Empirically, we apply this approach to forecasting U.S. inflation, international GDP growth, and the U.S. equity risk premium, finding that noise regularization improves and stabilizes predictive performance.
Joint work with Yuan Liao, Xinjie Ma and Zhentao Shi
Bio: Andreas Neuhierl is an Associate Professor of Finance at Purdue University's Mitch Daniels School of Business. He previously served on the faculty at Washington University in St. Louis and held visiting positions at the University of Chicago Booth School of Business. His research focuses on asset pricing, machine learning applications in finance, and financial econometrics. His work examines empirical asset pricing puzzles, return predictability using machine and deep learning methods, factor models, options markets, and handling missing data in financial panels. He has published in leading journals including the Journal of Financial Economics, Review of Financial Studies, and Management Science. He received his Ph.D. from Northwestern University's Kellogg School of Management.
Feb 20, 1:30-3pm, RAWLS 2058
Brian Bullins (Purdue CS)
Accelerating Optimization with Lp Norms
Abstract: In this talk, we will discuss recent advances in accelerating optimization with Lp norms. We first present algorithms for convex optimization problems when given access to a general class of Lp proximal oracles. In addition to providing improved accelerated rates of convergence, we establish that these rates are optimal for algorithms that query in the span of the outputs of the oracle, and we apply our techniques to Lp-regression as well as the settings of high-order and quasi-self-concordant optimization. We further propose a new accelerated first-order method for Lp-smooth convex optimization. Our method, which implicitly couples primal-dual iterate sequences taken with respect to differing norms, allows us in some instances to circumvent long-standing barriers in accelerated non-Euclidean steepest descent. These results are based on joint works with Deeksha Adil, Arun Jambulapati, Aaron Sidford, and Cedar Site Bai.
Bio: Brian Bullins is an assistant professor in the Department of Computer Science at Purdue University. Previously, he was a research assistant professor at the Toyota Technological Institute at Chicago, and he received his Ph.D. in computer science at Princeton University. His interests broadly lie in both the theory and practice of optimization for machine learning, and his work on improving matrix estimation techniques has led to novel higher-order methods for convex and nonconvex optimization in both sequential and distributed settings. He continues to explore how we might leverage these approaches for faster methods in highly parallel regimes and when encountering problems that exhibit non-Euclidean regularity.
Feb 27, 1:30-3pm, RAWLS 2058
Matthew Kovach (Purdue Economics)
Learning from an unknown DGP: Theory and Evidence
Abstract: We study belief revision from an unknown data-generating process (DGP). Information is given as a set of probability distributions, or general information, which extends the standard event notion while including qualitative information (A is more likely than B), interval information (A has a ten-to-twenty percent chance), and more. We behaviorally characterize Inertial Updating: the decision maker's posterior is of minimal subjective distance from her prior, given the information constraint. Further, we behaviorally characterize f-divergences, the class of distances consistent with Bayesian updating. We also present experimental evidence on belief updating from AI recommendations.
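For one simple kind of general information, inertial updating has a closed form. The sketch below uses KL divergence as the subjective distance and the interval-style constraint "event A has probability at least c"; in that case the divergence-minimizing posterior rescales mass inside and outside A. This is an illustrative special case, not the paper's general framework, and the numbers are invented.

```python
def kl_project(prior, A, c):
    """Inertial update under KL distance: the posterior closest to `prior`
    (in KL divergence) among distributions q with q(A) >= c.
    States are indexed 0..n-1; A is a set of state indices."""
    pA = sum(prior[s] for s in A)
    if pA >= c:
        return list(prior)       # the prior already satisfies the information
    scale_in = c / pA            # inflate mass on A up to exactly c
    scale_out = (1 - c) / (1 - pA)  # deflate mass off A proportionally
    return [p * (scale_in if s in A else scale_out) for s, p in enumerate(prior)]

prior = [0.5, 0.3, 0.2]                        # three states
posterior = kl_project(prior, A={2}, c=0.4)    # told: state 2 has >= 40% chance
```

Note the inertial signature: relative likelihoods within A and within its complement are preserved, so the update moves the prior no more than the information requires.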
Bio: Matthew Kovach is an Assistant Professor of Economics in the Mitchell E. Daniels, Jr. School of Business at Purdue University. His research focuses on decision theory and behavioral economics. Before joining Purdue, Matthew was an Assistant Professor in the Department of Economics at Virginia Tech.
March 6, 1:30-3pm, RAWLS 2058
Jimmy Zhang (Purdue IE)
Exploiting Implicit Geometry to Tame Non-smooth Optimization
Abstract: Non-smooth optimization problems are notoriously expensive to solve, suffering from an exponential gap in worst-case complexity compared to smooth optimization. Yet, these worst-case instances are rarely encountered in practice. This talk explores how we can close this gap by exploiting the unknown piecewise smooth (PWS) structure of objective functions.
First, we uncover how the bundle level method, a classic algorithm from the 1990s known for its superior empirical performance, implicitly leverages this unknown PWS structure to achieve smooth-optimization-level complexity. We then introduce the accelerated APEX method, a novel approach that achieves optimal complexity for PWS optimization.
Finally, we tackle the longstanding challenge of verifiable and efficient stopping criteria in non-smooth optimization. We present a novel $W$-certificate, generated efficiently via the bundle level approach. This certificate paves the way for adaptive algorithms that can make in-situ adjustments to exploit unknown error bound conditions.
Bio: Jimmy (Zhe) Zhang is an Assistant Professor in the School of Industrial Engineering at Purdue University. His research focuses on algorithm design for continuous and stochastic optimization problems. He received the Jarvis Student Paper Competition award at Georgia Tech and serves as an Area Chair for the 2026 INFORMS Optimization Society Conference.
Thursday, March 12, 11:30am-1pm, RAWLS 3058
Jianqing Fan (Princeton Finance, Statistics, ORFE)
Distinguished Seminar: Treatment Effect Estimation
Abstract: We investigate the problem of estimating the average treatment effect (ATE) under a very general setup where the covariates can be high-dimensional, highly correlated, and can have sparse nonlinear effects on the propensity and outcome models. We present the use of a Double Deep Learning strategy for estimation, which involves combining recently developed factor-augmented deep learning-based estimators, FAST-NN, for both the response functions and propensity scores to achieve our goal. By using FAST-NN, our method can select variables that contribute to propensity and outcome models in a completely nonparametric and algorithmic manner and adaptively learn low-dimensional function structures through neural networks. Our proposed novel estimator, FIDDLE (Factor Informed Double Deep Learning Estimator), estimates ATE based on the framework of augmented inverse propensity weighting (AIPW) with the FAST-NN-based response and propensity estimates. FIDDLE consistently estimates ATE even under model misspecification, and is flexible enough to also allow for low-dimensional covariates. Our method achieves semiparametric efficiency under a very flexible family of propensity and outcome models. We present extensive numerical studies on synthetic and real datasets to support our theoretical guarantees and establish the advantages of our methods over other traditional choices, especially when the data dimension is large.
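The AIPW building block that the talk's estimator plugs its neural-network nuisance estimates into is the standard doubly robust formula; the sketch below implements that generic formula with toy hand-picked nuisance values (it is not FIDDLE or FAST-NN, and all numbers are illustrative).

```python
def aipw_ate(y, t, mu1, mu0, ps):
    """Generic AIPW estimator of the average treatment effect:
    mean of  mu1(X) - mu0(X)
             + T*(Y - mu1(X))/e(X) - (1-T)*(Y - mu0(X))/(1 - e(X)),
    given outcomes y, treatment indicators t, outcome-model predictions
    mu1/mu0, and propensity scores ps."""
    total = 0.0
    for yi, ti, m1, m0, e in zip(y, t, mu1, mu0, ps):
        total += (m1 - m0
                  + ti * (yi - m1) / e
                  - (1 - ti) * (yi - m0) / (1 - e))
    return total / len(y)

# Toy data: 4 units; the outcome models imply a unit effect of about +1.
y   = [2.0, 1.0, 3.1, 0.9]
t   = [1, 0, 1, 0]
mu1 = [2.0, 2.0, 3.0, 1.9]   # predicted outcomes under treatment
mu0 = [1.0, 1.0, 2.0, 0.9]   # predicted outcomes under control
ps  = [0.5, 0.5, 0.6, 0.4]   # propensity scores
ate = aipw_ate(y, t, mu1, mu0, ps)
```

The double robustness referenced in the abstract lives in this formula: the estimate stays consistent if either the outcome models (mu1, mu0) or the propensity model (ps) is correct.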
(Joint work with Soham Jana, Sanjeev Kulkarni, and Qishuo Yin)
Bio: Jianqing Fan is the Frederick L. Moore Professor of Finance at Princeton University, where he directs labs in financial econometrics and statistics. He earned his PhD from UC Berkeley and previously held academic positions at UNC–Chapel Hill, UCLA, and the Chinese University of Hong Kong. A former president of both the Institute of Mathematical Statistics and the International Chinese Statistical Association, he has served as editor for leading journals, including Management Science, JASA, Annals of Statistics, Probability Theory and Related Fields, Journal of Business & Economic Statistics, and Journal of Econometrics. His finance work focuses on empirical asset pricing, option pricing, portfolio theory, risk assessment, high-frequency finance, text and complex data, and time series. His research spans statistics, machine learning, financial economics, and computational biology, with over 300 highly cited publications and four books. Recognized with numerous honors—including the COPSS Presidents’ Award, a Guggenheim Fellowship, the Guy Medal, the Noether Distinguished Scholar Award, the Le Cam Award and Lecture, and the Wald Memorial Award and Lecture—he is a fellow of multiple scientific societies and an elected member of Academia Sinica and the Royal Academy of Belgium. His current interests include high-dimensional statistics, data science, machine learning, AI, and finance.
Thursday, March 12, 3-4pm, KRAN G002
Panel Discussion: The Interplay Between Statistics and AI/ML
Moderator: Wei Sun, Associate Professor of Quantitative Methods
Panelists:
Jianqing Fan, Frederick L. Moore '18 Professor of Finance, Professor of Statistics and Machine Learning, Professor of ORFE, Princeton University
Harsha Honnappa, Associate Professor of Industrial Engineering
Guang Lin, Associate Dean for Research & Innovation, Professor of Mechanical Engineering, Professor of Mathematics
Xiao Wang, Department Head, J.O. Berger and M.E. Bock Professor of Statistics
March 27, 1:30-3pm, RAWLS 2058
Can Li (Purdue ChemE)
Enforcing Constraints in Deep Learning and Self-Supervised Tuning of Optimization Models
Abstract: Machine learning models are increasingly used to support decision making in complex engineering and infrastructure systems. However, standard learning architectures often ignore the structural properties that govern these systems, such as physical constraints, logical rules, or optimization principles. This can lead to predictions or decisions that are infeasible, unreliable, or difficult to interpret in safety-critical applications.
This talk presents recent work at the intersection of machine learning and optimization that integrates structural knowledge directly into learning models. The first part introduces a model-agnostic framework for enforcing hard constraints on neural network outputs. The proposed architecture combines a task network trained for predictive accuracy with a safe network constructed using decision rules from stochastic and robust optimization. The final prediction is obtained through a convex combination of the two subnetworks, guaranteeing satisfaction of input-dependent linear equality and inequality constraints across the entire input space without requiring iterative projection or runtime optimization. Extensions that incorporate both linear and logical constraints into deep learning models will also be discussed.
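The convex-combination step described above reduces to a one-dimensional computation per constraint. The sketch below illustrates the idea for linear inequality constraints A y <= b: given a (possibly infeasible) task output and a known-feasible safe output, it finds the largest blending weight that keeps the combination feasible. In the talk's architecture both outputs come from trained networks; here they are toy vectors, and all names are illustrative.

```python
def dot(a, y):
    return sum(ai * yi for ai, yi in zip(a, y))

def safe_combine(y_task, y_safe, A, b):
    """Return lam*y_task + (1-lam)*y_safe with the largest lam in [0, 1]
    such that A y <= b holds, assuming y_safe itself is feasible."""
    lam = 1.0
    for a, bi in zip(A, b):
        gt, gs = dot(a, y_task), dot(a, y_safe)
        if gt > bi:  # this constraint is violated at the task output
            lam = min(lam, (bi - gs) / (gt - gs))
    return [lam * yt + (1 - lam) * ys for yt, ys in zip(y_task, y_safe)]

A = [[1.0, 1.0], [-1.0, 0.0]]   # constraints: y1 + y2 <= 1 and y1 >= 0
b = [1.0, 0.0]
y_task = [0.9, 0.4]             # violates y1 + y2 <= 1
y_safe = [0.2, 0.2]             # strictly feasible anchor
y = safe_combine(y_task, y_safe, A, b)
```

Because feasibility is restored by a closed-form blend rather than a projection, no iterative solve is needed at inference time, which is the property the abstract emphasizes.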
The second part focuses on improving the reliability of data-driven optimization proxies. While machine learning models can rapidly approximate optimization solutions, they often fail to capture system constraints and underlying physics. To address this limitation, we introduce a self-supervised learning framework for tuning low-fidelity optimization models using differentiable optimization layers. The framework integrates a learning model that predicts tuning parameters, a differentiable optimization layer that maps these parameters to decisions, and a high-fidelity model that evaluates solution quality. Applied to the AC Optimal Power Flow problem using the DC-OPF model as a low-fidelity approximation, the approach achieves improved data efficiency, faster training, and higher solution accuracy compared to conventional supervised learning methods.
Bio: Can obtained his bachelor’s degree in Chemical Engineering from Tsinghua University, China. He completed his PhD in Chemical Engineering at Carnegie Mellon University, where his research focused on stochastic mixed-integer nonlinear programming and long-term expansion planning of power systems. Can did a one-year postdoc at Polytechnique Montreal on using machine learning techniques to accelerate optimization algorithms. He joined the Davidson School of Chemical Engineering at Purdue University as an assistant professor in the Fall of 2022. His research group focuses on optimization, machine learning, and applications in sustainable energy systems. His group won Air Liquide’s global scientific challenge on data sharing for decarbonization in 2023, the Amazon Research Award in 2024, and the NSF CAREER Award in 2025.
April 3, 1:30-3pm, RAWLS 2058
Zhaosong Lu (University of Minnesota ISyE)
First-Order Methods for Bilevel Optimization
Abstract: Bilevel optimization, also known as two-level optimization, is an important branch within mathematical optimization. It has found applications across various domains, including economics, logistics, supply chain, transportation, engineering design, and machine learning. In this talk, we will present first-order methods for solving a class of bilevel optimization problems using either single or sequential minimax optimization schemes. We will discuss the first-order oracle complexity of these methods and present preliminary numerical results to illustrate their performance. This is joint work with Sanyou Mei (HKUST).
Bio: Zhaosong Lu is a Full Professor in the Department of Industrial and Systems Engineering at the University of Minnesota. He received his Ph.D. in Operations Research from Georgia Institute of Technology. His research focuses on the theory and algorithms of continuous optimization, with applications in data science and machine learning. Dr. Lu has published extensively in leading journals such as Mathematical Programming, Mathematics of Operations Research, and SIAM Journal on Optimization. His work has been supported by funding agencies including AFOSR, NSF, and ONR. He has served on several prize committees, such as the INFORMS George Nicholson Prize and the ICCOPT Best Paper Award. In addition, he has served as an Associate Editor for journals including Mathematics of Operations Research, SIAM Journal on Optimization, Computational Optimization and Applications, and Journal of Global Optimization.
April 17, 1:30-3pm, RAWLS 2058
Hongtu Zhu (UNC Biostatistics)
Deep Distributional Learning with Non-crossing Quantile Network
Abstract: In this paper, we introduce a non-crossing quantile (NQ) network for conditional distribution learning. By leveraging non-negative activation functions, the NQ network ensures that the learned distributions remain monotonic, effectively addressing the issue of quantile crossing. Furthermore, the NQ network-based deep distributional learning framework is highly adaptable, applicable to a wide range of applications, from classical non-parametric quantile regression to more advanced tasks such as causal effect estimation and distributional reinforcement learning (RL). We also develop a comprehensive theoretical foundation for the deep NQ estimator and its application to distributional RL, providing an in-depth analysis that demonstrates its effectiveness across these domains. Our experimental results further highlight the robustness and versatility of the NQ network.
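The non-crossing guarantee rests on a simple construction: if the gaps between successive quantile outputs pass through a non-negative activation, the predicted quantile curve is monotone by construction. The sketch below shows that mapping on raw numbers; it is an illustrative, dependency-free rendering of the idea, not the NQ network itself (which learns the raw outputs end to end).

```python
import math

def softplus(z):
    """Non-negative activation softplus(z) = log(1 + exp(z)),
    written in a numerically stable form."""
    return math.log1p(math.exp(-abs(z))) + max(z, 0.0)

def quantile_curve(base, raw_gaps):
    """Map unconstrained outputs to monotone quantile predictions:
    q_1 = base, and q_{k+1} = q_k + softplus(raw_gap_k) >= q_k."""
    q = [base]
    for g in raw_gaps:
        q.append(q[-1] + softplus(g))
    return q

# Raw network outputs can be any real numbers; predictions never cross.
q = quantile_curve(-0.5, [1.2, -3.0, 0.1, -0.4])
```

Since softplus is strictly positive, the learned conditional quantile function is strictly increasing in the quantile level regardless of the raw outputs, which is exactly the quantile-crossing fix the abstract describes.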
Bio: Dr. Hongtu Zhu is the Kenan Distinguished Professor of Biostatistics, Statistics, Radiology, Computer Science and Genetics at the University of North Carolina at Chapel Hill. He was a DiDi Fellow and Chief Scientist of Statistics at DiDi Chuxing between 2018 and 2020 and held the Endowed Bao-Shan Jing Professorship in Diagnostic Imaging at MD Anderson Cancer Center between 2016 and 2018. He is an internationally recognized expert in statistical learning, medical image analysis, precision medicine, biostatistics, artificial intelligence, and big data analytics. He received an established investigator award from the Cancer Prevention Research Institute of Texas in 2016, the INFORMS Daniel H. Wagner Prize for Excellence in Operations Research Practice in 2019, the ICSA 2025 Distinguished Achievement Award, the IMS 2027 Medallion Award and Lecture, and the COPSS 2025 Snedecor Award. He has published more than 360 papers in top journals, including Nature, Science, Cell, Nature Genetics, Nature Communications, PNAS, AOS, JASA, Biometrika, and JRSSB, as well as 64+ papers at top conferences, including NeurIPS, ICLR, ICML, AAAI, and KDD. He is the coordinating editor of JASA and the editor of JASA ACS.
April 24, 1:30-3pm, RAWLS 2077
Han Liu (Northwestern University CS)
TBD
May 1, 1:30-3pm, RAWLS 2058
David Shmoys (Cornell ORIE)
TBD