Bio
Simge Küçükyavuz is Chair and David A. and Karen Richards Sachs Professor in the Industrial Engineering and Management Sciences Department at Northwestern University. She is an expert in mixed-integer, large-scale, and stochastic optimization. Her methodologies have applications in complex computational problems across numerous domains, including social networks, computing and energy infrastructure, statistical learning, and logistics. Her research has been supported by multiple grants from the National Science Foundation (NSF) and the Office of Naval Research (ONR). She is an INFORMS Fellow and a recipient of the NSF CAREER Award and the INFORMS Computing Society (ICS) Prize. She is a past chair of ICS and serves on the editorial boards of Mathematics of Operations Research, Mathematical Programming, Operations Research, SIAM Journal on Optimization, and the MOS-SIAM Series on Optimization. She received her Ph.D. in Industrial Engineering and Operations Research from the University of California, Berkeley.
Talk: Mixed-Integer Convex Optimization for Statistical Learning
Abstract
Many statistical learning problems involving sparsity and other structural constraints can be formulated as mixed-integer convex optimization problems involving indicator variables and constraints on these indicators. As motivation, the first part of this talk will focus on the problem of learning directed acyclic graphs (DAGs) from continuous observational data, also known as the causal discovery problem. Current state-of-the-art structure learning methods for this problem face two significant limitations: (i) they lack optimality guarantees and often yield suboptimal solutions; (ii) they rely on the restrictive assumption of homoscedastic noise. To address these shortcomings, we propose a computationally efficient mixed-integer programming framework. Numerical experiments demonstrate that our method outperforms existing algorithms and is robust to noise heteroscedasticity.
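As a toy illustration of the combinatorial structure behind causal discovery (a hand-rolled sketch, not the speaker's MIP framework), the following code enumerates every DAG on three variables and picks the one minimizing a penalized least-squares score. The data-generating DAG, coefficients, and penalty weight are all invented for the example:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a known chain: X1 -> X2 -> X3 (illustrative choice).
n = 500
x1 = rng.normal(size=n)
x2 = 2.0 * x1 + rng.normal(scale=0.5, size=n)
x3 = -1.5 * x2 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x1, x2, x3])
d = X.shape[1]

def is_acyclic(edges):
    """Kahn's algorithm: True iff the edge set admits a topological order."""
    indeg = {v: 0 for v in range(d)}
    for (_, v) in edges:
        indeg[v] += 1
    frontier = [v for v in range(d) if indeg[v] == 0]
    seen = 0
    while frontier:
        u = frontier.pop()
        seen += 1
        for (a, b) in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    frontier.append(b)
    return seen == d

def score(edges, lam=0.1):
    """Residual variance of each node regressed on its parents, plus an edge penalty."""
    total = 0.0
    for v in range(d):
        parents = [u for (u, w) in edges if w == v]
        if parents:
            A = X[:, parents]
            beta, *_ = np.linalg.lstsq(A, X[:, v], rcond=None)
            resid = X[:, v] - A @ beta
        else:
            resid = X[:, v] - X[:, v].mean()
        total += resid.var()
    return total + lam * len(edges)

all_edges = [(u, v) for u in range(d) for v in range(d) if u != v]
best = min(
    (e for r in range(len(all_edges) + 1)
       for e in itertools.combinations(all_edges, r)
       if is_acyclic(e)),
    key=score,
)
print(sorted(best))  # the true edges (0, 1) and (1, 2) attain the best score
```

Exhaustive enumeration is hopeless beyond a handful of nodes (the number of DAGs grows super-exponentially), which is precisely why an exact mixed-integer formulation with indicator variables for edges is attractive.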
In the second part of the talk, we address the challenge of weak continuous relaxations inherent in natural mixed-integer convex formulations for such structured learning problems, which hinder the performance of branch-and-bound-based solvers. We develop novel methods to strengthen these formulations by deriving a convex hull description of the associated mixed-integer set in an extended space. Notably, this approach reduces the convexification of these problems to the characterization of a polyhedral set in the extended formulation. Our new theoretical framework unifies several previously established results and provides a foundation for applying polyhedral methods to strengthen the formulations of such mixed-integer nonlinear sets.
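A minimal numerical sketch of the kind of strengthening at play, using the classical perspective reformulation of an indicator-constrained quadratic (an illustrative textbook example, not the talk's specific construction): the perspective relaxation gives a strictly tighter lower bound than the natural big-M relaxation.

```python
import numpy as np

# On/off set: t >= x^2, 0 <= x <= M*z, z in {0,1}, plus a side constraint x >= a.
# Objective: minimize t + c*z.  The objective increases in x, so fix x = a and
# compare the lower bounds from two continuous relaxations over z in (0, 1].
a, c, M = 1.0, 4.0, 10.0
zs = np.linspace(1e-6, 1.0, 100001)
feasible = zs >= a / M                     # x <= M*z with x fixed at a

# Natural (big-M) relaxation: t >= x^2, independent of z.
bigm_bound = np.min(np.where(feasible, a**2 + c * zs, np.inf))

# Perspective relaxation: t >= x^2 / z, the convex-hull strengthening.
persp_bound = np.min(np.where(feasible, a**2 / zs + c * zs, np.inf))

mip_opt = a**2 + c                         # x >= a > 0 forces z = 1
print(bigm_bound, persp_bound, mip_opt)    # roughly 1.4, 4.0, 5.0
```

The big-M relaxation lets z shrink to a/M and bounds the objective by about 1.4, while the perspective bound of about 4.0 is much closer to the true optimum of 5.0; branch-and-bound solvers prune far more aggressively with the tighter bound.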
Bio
Sanjeeb Dash is a member of the Mathematics and Theoretical Computer Science department at IBM Research, where he leads the Foundations of Optimization and Probability group at IBM's T. J. Watson Research Center. He works on both theoretical and practical aspects of Discrete Optimization, with a focus on Integer Programming and Linear Programming, including applications of these areas to problems in AI and Statistics. He has co-authored the QSopt and QSopt_ex linear programming solvers and is an Area Editor of the journal Mathematical Programming Computation.
Talk: Applications of Polynomial Optimization to Scientific Discovery and New Solution Techniques
Abstract
Many data-driven machine learning techniques, including symbolic regression, are now used in scientific discovery applications. In symbolic regression, the functional form is not fixed a priori; instead, one learns a function of bounded expression length composed from an input list of operators and variables.
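A toy brute-force version of this search, assuming a tiny operator list {+, -, *} and leaves {x, 1} (purely illustrative; real symbolic regression systems search far larger spaces with far smarter strategies):

```python
import itertools

# Binary operators available to the search.
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def expressions(depth):
    """All expression trees up to the given depth, as (string, callable) pairs."""
    if depth == 0:
        return [("x", lambda x: x), ("1", lambda x: 1)]
    smaller = expressions(depth - 1)
    out = list(smaller)
    for (sl, fl), (sr, fr) in itertools.product(smaller, repeat=2):
        for name, op in OPS.items():
            out.append((f"({sl} {name} {sr})",
                        (lambda f1, f2, o: (lambda x: o(f1(x), f2(x))))(fl, fr, op)))
    return out

# "Data" sampled from a hidden ground-truth formula x^2 + x.
target = lambda x: x * x + x
points = [-2, -1, 0, 1, 2, 3]

matches = [s for s, f in expressions(2)
           if all(f(p) == target(p) for p in points)]
print(matches[0])  # first discovered formula matching the data
```

Even at depth two over two leaves the search space has hundreds of candidates, which is why scalable formulations of this search are an active research topic.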
However, finding formulas that are consistent with known theory is a challenging task. We describe our recent software implementation, named AI Hilbert, which draws on techniques from polynomial optimization and algebraic geometry to obtain a scientific formula when both the desired formula and the background theory are expressible as polynomials. We also discuss recent work, motivated by discrete optimization methods, on improving polynomial optimization techniques.
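The algebraic core can be illustrated with a toy ideal-membership check (a hand-rolled sketch, not AI Hilbert's actual machinery): given the background axiom x - y = 0, the candidate formula x² - y² is consistent with the theory because it is a polynomial multiple of the axiom, and a small brute-force search over multipliers finds the certificate.

```python
import itertools

def pmul(p, q):
    """Multiply sparse polynomials in x, y, keyed by (deg_x, deg_y)."""
    out = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            key = (i + k, j + l)
            out[key] = out.get(key, 0) + a * b
    return {k: v for k, v in out.items() if v != 0}

# Background axiom g = x - y (the theory asserts x = y), and a
# target formula q = x^2 - y^2; ask whether q = h * g for some low-degree h.
g = {(1, 0): 1, (0, 1): -1}
q = {(2, 0): 1, (0, 2): -1}

basis = [(1, 0), (0, 1), (0, 0)]          # candidate monomials x, y, 1 for h
found = None
for coeffs in itertools.product(range(-2, 3), repeat=len(basis)):
    h = {m: c for m, c in zip(basis, coeffs) if c != 0}
    if pmul(h, g) == q:
        found = h
        break
print(found)  # the multiplier h = x + y certifies the derivation
```

In the real setting the multipliers have unknown real coefficients and the certificate is found by optimization rather than enumeration, but the object being searched for, a polynomial combination of the axioms equal to the target, is the same.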
Bio
Axel Parmentier has been a researcher at École des Ponts since 2016, where he founded and holds the chair of artificial intelligence for air transport with Air France. His research lies at the intersection of operational research and machine learning. He recently received the Robert Faure Award, which is given every three years by the French society of operational research (ROADEF) to a researcher under 35 for their contributions to the field.
Talk: Recent Trends in Combinatorial Optimization Augmented Machine Learning
Abstract
Combinatorial optimization augmented machine learning (COAML) is a novel and rapidly growing field that integrates methods from machine learning and operations research to tackle data-driven problems that involve both uncertainty and combinatorics. These problems arise frequently in industrial processes, where firms seek to leverage large and noisy data sets to better optimize their operations. COAML typically involves embedding combinatorial optimization layers into neural networks and training them with decision-aware learning techniques. This talk provides an overview of the field, covering its main applications, algorithms, and theoretical foundations. We also demonstrate the effectiveness of COAML on contextual and dynamic stochastic optimization problems, as evidenced by its winning performance on the 2022 EURO-NeurIPS challenge on dynamic vehicle routing.
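A toy example of why decision-aware learning matters (illustrative only; the instance, costs, and predictions below are invented, and the challenge-winning pipeline is far richer): a prediction with large mean-squared error can still induce the optimal decision through the combinatorial layer, while a seemingly more accurate prediction can flip it.

```python
# Tiny shortest-path instance: 4 feasible paths, each a 0/1 vector over 5 edges.
paths = [
    (1, 1, 0, 0, 0),
    (1, 0, 1, 1, 0),
    (0, 0, 0, 1, 1),
    (0, 1, 1, 0, 1),
]
true_cost = (1.0, 1.0, 3.0, 0.5, 2.0)

def dot(w, c):
    return sum(wi * ci for wi, ci in zip(w, c))

def decide(cost):
    """Combinatorial optimization layer: pick the cheapest feasible path."""
    return min(paths, key=lambda w: dot(w, cost))

def regret(pred):
    """Decision regret: extra true cost incurred by optimizing predicted costs."""
    return dot(decide(pred), true_cost) - dot(decide(true_cost), true_cost)

def mse(pred):
    return sum((p - c) ** 2 for p, c in zip(pred, true_cost)) / len(true_cost)

# Prediction A: large errors but the cost ranking is preserved -> zero regret.
pred_a = (2.0, 2.0, 6.0, 1.0, 4.0)
# Prediction B: small errors that flip the optimal path -> positive regret.
pred_b = (1.4, 1.4, 2.8, 0.4, 1.8)

print(mse(pred_a), regret(pred_a))
print(mse(pred_b), regret(pred_b))
```

Training to minimize prediction error (here, MSE) and training to minimize downstream decision regret are therefore different objectives; decision-aware learning techniques differentiate through (or around) the optimization layer to target the latter directly.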