Keynote Speakers

Decomposition to tackle large-scale discrete optimisation problems

Elina Rönnberg, Linköping University
Abstract: Most practically relevant discrete optimisation problems are NP-hard. When such problems are also complex and large-scale, there is a great risk that state-of-the-art optimisation solvers will fail to solve them. In some cases, decomposition-based methods can make a significant difference and allow challenging problems to be solved within a reasonable time frame. Decomposition means reformulating a problem into a set of simpler problems that are typically solved by an iterative scheme to produce a solution to the original problem. The purpose is to distribute the computational burden of the original problem onto the simpler ones in a way that pays off with respect to total solution time. Achieving this requires exploiting problem structure in the decomposition and designing an efficient solution scheme for the simpler problems. In this overview talk, I will address Dantzig-Wolfe decomposition and logic-based Benders decomposition, and give examples of method development and of the impact of applying decomposition in practical applications.
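
As a concrete illustration of the Dantzig-Wolfe idea sketched in the abstract, below is a minimal column-generation loop for the classic cutting-stock problem: the restricted master problem is an LP solved with scipy, and the pricing subproblem is an unbounded knapsack solved by dynamic programming. The instance data and all implementation choices are illustrative assumptions, not a description of the speaker's work.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative cutting-stock instance: roll width, piece widths, demands.
W = 100
widths = np.array([45, 36, 31, 14])
demand = np.array([97, 610, 395, 211])

# Start with trivial patterns: one piece type per roll, cut as often as it fits.
patterns = [np.eye(1, len(widths), i)[0] * (W // w) for i, w in enumerate(widths)]

while True:
    A = np.column_stack(patterns)
    # Restricted master problem (LP relaxation): minimise rolls used, meet demand.
    res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, method="highs")
    duals = -res.ineqlin.marginals  # dual prices of the demand constraints (scipy >= 1.7)
    # Pricing subproblem: unbounded knapsack maximising the dual value of a pattern.
    best = np.zeros(W + 1)
    choice = [None] * (W + 1)
    for cap in range(1, W + 1):
        for i, w in enumerate(widths):
            if w <= cap and best[cap - w] + duals[i] > best[cap]:
                best[cap] = best[cap - w] + duals[i]
                choice[cap] = i
    if best[W] <= 1 + 1e-9:  # no pattern with negative reduced cost remains
        break
    pattern, cap = np.zeros(len(widths)), W
    while choice[cap] is not None:  # reconstruct the improving pattern
        pattern[choice[cap]] += 1
        cap -= widths[choice[cap]]
    patterns.append(pattern)

print(f"{len(patterns)} patterns generated, LP bound {res.fun:.2f} rolls")
```

Each iteration adds the cutting pattern with the most negative reduced cost; when no such pattern exists, the LP relaxation of the full master problem has been solved without ever enumerating the exponentially many patterns explicitly.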
Bio: Elina Rönnberg is a Senior Associate Professor in Optimisation at the Department of Mathematics, Linköping University, where she leads the Mathematics and algorithms for intelligent decision-making group. She has experience working in both industry and academia. Her research is in discrete optimisation, with a special interest in decomposition methods and applications in scheduling and resource allocation. Her applied projects are often carried out in collaboration with industry; current application areas include air traffic management, electric vehicle routing, and the design of electronic systems in aircraft. Her contributions to decomposition methods include Dantzig-Wolfe decomposition, Lagrangian relaxation, column generation for integer programs, and logic-based Benders decomposition.

Optimization and machine learning on quantum computers: yes/no/maybe?

Giacomo Nannicini, University of Southern California 

Abstract: We would like to solve difficult optimization problems -- perhaps to allocate resources, perhaps to train a machine learning model -- and we would like to do it quickly. Is it reasonable to hope that quantum computers can help us do that? In this talk we discuss the current state of quantum optimization and quantum machine learning research, for problems that are classically defined: the problems are described on a classical (i.e., non-quantum) computer, and the solution that we are looking for is also classical. We will see that for some problems quantum algorithms are promising, even if only as an alternative to classical algorithms with different tradeoffs, but in other cases there has been no convincing evidence of their usefulness so far.

Bio: Giacomo Nannicini is an associate professor in the Industrial & Systems Engineering department at the University of Southern California, which he joined in 2022. Prior to that, he was a research staff member in the quantum algorithms group at the IBM T. J. Watson Research Center, and an assistant professor in the Engineering Systems and Design pillar at the Singapore University of Technology and Design. His main research interest is optimization and its applications. Giacomo has received several awards, including the 2021 Beale–Orchard-Hays prize, the 2015 Robert Faure prize, and the 2012 Glover-Klingman prize.

Decision-Focused Learning: Foundations, State of the Art, Benchmarks and Future Opportunities

Tias Guns, KU Leuven
Abstract: Increasingly, combinatorial optimisation problems follow a predict + optimize paradigm, where some of the parameters (costs, volumes, capacities) are predicted from data and those predictions are fed into a downstream combinatorial optimisation problem. How best to train these predictive models? Decision-focused learning (DFL) is an emerging paradigm in machine learning which trains a model to optimize decisions, integrating prediction and optimization in an end-to-end system. This paradigm holds the promise to revolutionize decision-making in many real-world applications that operate under uncertainty, where the estimation of unknown parameters within these decision models often becomes a substantial roadblock to high-quality solutions. This talk presents a comprehensive review of DFL. It provides an in-depth analysis of the various techniques devised to integrate machine learning and optimization models, introduces a taxonomy of DFL methods distinguished by their unique characteristics, and conducts an extensive empirical evaluation of these methods, proposing suitable benchmark datasets and tasks for DFL. Finally, we will provide valuable insights into current and potential future avenues in DFL research.
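
To make the predict + optimize pipeline concrete, here is a minimal decision-focused training sketch using the SPO+ subgradient of Elmachtoub and Grigas, one of the methods such a taxonomy covers. The toy top-k selection problem, the linear predictor, and all names below are illustrative assumptions, not the benchmarks discussed in the talk.

```python
import numpy as np

def solve(c, k):
    """Oracle for the downstream problem: pick the k items with smallest cost."""
    w = np.zeros_like(c)
    w[np.argsort(c)[:k]] = 1.0
    return w

def spo_plus_grad(c_hat, c, k):
    """Subgradient of the SPO+ loss w.r.t. the predicted cost vector c_hat."""
    return 2.0 * (solve(c, k) - solve(2.0 * c_hat - c, k))

# Toy data: true costs are a noisy linear function of the features.
rng = np.random.default_rng(0)
n_items, n_feat, k = 10, 5, 3
W_true = rng.normal(size=(n_items, n_feat))
X = rng.normal(size=(200, n_feat))
C = X @ W_true.T + 0.1 * rng.normal(size=(200, n_items))

# Train a linear cost predictor end-to-end by stochastic subgradient descent.
W, lr = np.zeros((n_items, n_feat)), 0.01
for epoch in range(30):
    for x, c in zip(X, C):
        c_hat = W @ x
        W -= lr * np.outer(spo_plus_grad(c_hat, c, k), x)

# Evaluate decision quality: regret of deciding on predicted rather than true costs.
regret = np.mean([c @ solve(W @ x, k) - c @ solve(c, k) for x, c in zip(X, C)])
print(f"average regret: {regret:.4f}")
```

The key point of the sketch is that the training signal is the quality of the downstream decision (the regret), not the accuracy of the cost predictions themselves, which is what distinguishes DFL from a standard predict-then-optimize setup.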
Bio: Tias Guns is an Associate Professor at the DTAI lab of KU Leuven, Belgium. His research is at the intersection of machine learning and combinatorial optimisation. Tias' expertise is in the hybridisation of machine learning systems with constraint solving systems, more specifically building constraint solving systems that reason on both explicit knowledge and knowledge learned from data. Examples include learning the preferences of planners in vehicle routing and solving new routing problems that take both operational constraints and learned human preferences into account, or building energy price predictors specifically for energy-aware scheduling and planning maintenance crews based on expected failures. He was awarded a prestigious ERC Consolidator grant in 2021 to work on conversational human-aware technology for optimisation.