Ambros Gleixner
On correctness of numerical solvers in mixed-integer optimization
Axel Parmentier
Recent trends in Combinatorial Optimization Augmented Machine Learning
Yingqian Zhang
Learning to Solve Combinatorial Optimization from Data and Language
Ambros Gleixner
On correctness of numerical solvers in mixed-integer optimization
Over the past decades, mixed-integer programming (MIP) solvers have developed into highly successful and versatile tools for tackling a wide range of challenging optimization problems in operations research and beyond.
As a result, a large and growing body of computational research in academia and industry today relies on the performance and results of black-box MIP solvers. However, ensuring the correctness of solver implementations and their results is challenging, not only because of the interplay of the many algorithmic components of a branch-and-cut solver, but also because of the ubiquitous use of limited-precision floating-point arithmetic. In this talk we will discuss recent advances in this regard and highlight three contributions that aim to improve the correctness of MIP solvers in different respects: delta-debugging as a tool to pinpoint systematic implementation errors more easily, the use of hybrid-precision arithmetic for building numerically exact linear programming and mixed-integer solvers, and the construction and verification of solver-independent branch-and-cut certificates.
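To make the first contribution concrete: delta-debugging shrinks a bug-triggering input to a small core that still triggers the bug. The following is a minimal Python sketch of the classic ddmin loop, not the speaker's actual tool; the `triggers_bug` predicate is a hypothetical stand-in for, e.g., running the solver on a subset of a model's constraints and checking whether the failure persists.

```python
def ddmin(items, triggers_bug):
    """Shrink `items` to a small sublist that still triggers the bug.

    Classic delta-debugging: split the input into chunks, try removing
    one chunk at a time, and keep any reduced input that still fails.
    """
    n = 2  # current number of chunks
    while len(items) >= 2:
        chunk = max(1, len(items) // n)
        subsets = [items[i:i + chunk] for i in range(0, len(items), chunk)]
        reduced = False
        for i in range(len(subsets)):
            # Complement of subset i, preserving the original order
            complement = [x for j, s in enumerate(subsets) if j != i for x in s]
            if triggers_bug(complement):
                items = complement           # bug persists without subset i
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(items):
                break                        # no single chunk can be removed
            n = min(len(items), n * 2)       # retry at finer granularity
    return items
```

For instance, if the failure needs two specific constraints to be present, ddmin isolates exactly those two out of an initially large instance, which turns a thousand-row reproducer into one a developer can inspect by hand.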
Axel Parmentier
Recent trends in Combinatorial Optimization Augmented Machine Learning
Combinatorial Optimization Augmented Machine Learning (COAML) is a rapidly growing field that integrates machine learning and operations research methods to solve data-driven problems involving both uncertainty and combinatorial structures. These problems frequently arise in industrial settings where organizations leverage large, noisy datasets to optimize operations. COAML embeds combinatorial optimization layers into neural networks and trains them using decision-aware learning techniques. It excels in contextual and dynamic stochastic optimization problems, as demonstrated by its winning performance in the 2022 EURO-NeurIPS Dynamic Vehicle Routing Challenge. This talk will introduce the main applications of the field and then cover three recent contributions: a primal-dual empirical cost minimization algorithm, a structured reinforcement learning extension, and new regularizations that exploit connections between local search and Monte Carlo methods. These algorithms enhance performance, reduce computational costs, and lower data requirements, enabling new large-scale contextual stochastic optimization applications. Additionally, they provide convergence guarantees that support new statistical learning generalization bounds.
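A common way to embed a combinatorial optimization layer into a neural network, as described above, is to smooth the discrete oracle by random perturbation of its inputs so that its expectation becomes differentiable. Below is a minimal numpy sketch of such a Monte Carlo perturbed layer, using a hypothetical top-k selection oracle as the combinatorial component; it illustrates the general perturbation idea, not the specific architectures from the talk.

```python
import numpy as np

def topk_oracle(theta, k=2):
    # Combinatorial oracle: 0/1 vector selecting the k largest scores
    sol = np.zeros_like(theta)
    sol[np.argsort(theta)[-k:]] = 1.0
    return sol

def perturbed_layer(theta, oracle, n_samples=1000, eps=0.5, rng=None):
    """Smooth a combinatorial oracle by Gaussian perturbation.

    Returns a Monte Carlo estimate of E[oracle(theta + eps * Z)],
    which varies smoothly in theta and can therefore sit inside a
    neural network trained by gradient descent.
    """
    rng = rng or np.random.default_rng(0)
    Z = rng.standard_normal((n_samples, theta.size))
    sols = np.stack([oracle(theta + eps * z) for z in Z])
    return sols.mean(axis=0)
```

The output is a vector of selection frequencies rather than a hard 0/1 solution: entries with clearly higher scores approach 1, entries with lower scores approach 0, and the transition is smooth enough to backpropagate through.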
Yingqian Zhang
Eindhoven University of Technology, Netherlands
Learning to Solve Combinatorial Optimization from Data and Language
Recent developments in machine learning are reshaping how we approach combinatorial optimization problems (COPs). For example, deep reinforcement learning (DRL) has emerged as a promising paradigm for tackling COPs, leading to a fast-growing research area at the intersection of AI and OR. Most existing DRL approaches rely on online RL, where an agent learns by interacting with a surrogate environment. However, these methods typically require a large number of interactions to learn good policies. This is inefficient, especially given that many well-established algorithms already exist to generate high-quality solutions for combinatorial optimization problems.
In this talk, I will discuss how previously collected solution data can be leveraged to learn optimization policies using offline RL. Surprisingly, we find that policies trained on purely random solution data can outperform those trained on higher-quality solutions. This counterintuitive result raises interesting questions about how the structure and diversity of training data influence policy learning. I will then discuss the emerging role of LLMs in combinatorial optimization and, in particular, introduce the Natural Language Combinatorial Optimization benchmark, which evaluates the ability of LLMs to perform end-to-end problem solving for COPs from natural language problem descriptions.
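To make the offline setting concrete, here is a toy sketch of learning from previously collected random solution data: random tours of a small TSP are logged as (partial tour, next city, return) triples, and a simple return-weighted behavior-cloning rule is fit to them. This rule is only a stand-in for the offline RL algorithms discussed in the talk, and the instance, dataset size, and temperature are illustrative assumptions.

```python
import random
from collections import defaultdict
import numpy as np

# Toy TSP instance: 5 cities with random coordinates in the unit square
rng = random.Random(0)
coords = [(rng.random(), rng.random()) for _ in range(5)]

def tour_length(tour):
    return sum(np.hypot(coords[a][0] - coords[b][0], coords[a][1] - coords[b][1])
               for a, b in zip(tour, tour[1:] + tour[:1]))

# 1) Offline dataset: purely random solutions, no interaction with a solver
dataset = []
for _ in range(2000):
    tour = list(range(5))
    rng.shuffle(tour)
    ret = -tour_length(tour)                 # return = negative tour cost
    for t in range(len(tour) - 1):
        state = tuple(tour[:t + 1])          # partial tour built so far
        dataset.append((state, tour[t + 1], ret))

# 2) Return-weighted behavior cloning: upweight actions seen in better tours
scores = defaultdict(float)
for state, action, ret in dataset:
    scores[(state, action)] += np.exp(ret / 0.1)   # temperature 0.1

def policy(state, remaining):
    return max(remaining, key=lambda a: scores.get((state, a), 0.0))

# Greedy rollout of the learned policy, starting from city 0
learned = [0]
while len(learned) < 5:
    learned.append(policy(tuple(learned), [c for c in range(5) if c not in learned]))
```

Even this crude scheme extracts a coherent construction policy from nothing but logged random tours, which is the point of the offline perspective: the data-collection policy can be arbitrary, including uniformly random.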