Recent advances in integrating

Machine Learning &
Combinatorial Optimization

Tutorial MH7 at AAAI-21

8:30 am – 11:45 am PST

February 3, 2021

Brief Overview

This tutorial will provide an overview of the recent impact machine learning is having on combinatorial optimization, particularly under the Mixed Integer Programming (MIP) framework. Topics covered will include ML and reinforcement learning for predicting feasible solutions, improving exact solvers with ML, a software framework for learning in exact MIP solvers, and the emerging paradigm of decision-focused learning.

Bengio, Lodi, Prouvost (2018): Different ways in which ML can intervene in a Combinatorial Optimization algorithm

Illustration of the Decision-Focused Learning framework

Description

Combinatorial Optimization (CO) is a cornerstone of Computer Science, Artificial Intelligence (AI) and Operations Research. It has been widely successful in industrial applications ranging from airline crew planning to sports scheduling and kidney exchange. While CO used to underlie most AI research through the satisfiability problem (SAT), modern AI research has moved towards more probabilistic approaches, and the connection between the two fields has weakened. In the last five to ten years, however, there has been a strong revival of interest in improving combinatorial optimization using machine-learning methods.

This tutorial aims to introduce the audience to this exciting and growing field. We believe that the audience will benefit greatly from the tutorial, as it will lay out the landscape of this research space, discuss the merits of different ML techniques in the CO setting, and survey the various CO tasks that have benefited from ML. We will also introduce a new open-source library, Ecole, aimed at making the field more accessible to newcomers. While the tutorial will mostly focus on Mixed Integer Programming as a concrete mathematical framework for CO, we will also touch on the relationship between MIP and other constraint reasoning frameworks such as satisfiability (SAT) and constraint satisfaction (CSP), as most of the ideas presented carry over to these frameworks.

Why this tutorial?

The aim of this tutorial is to connect two central topics in AI: machine learning and combinatorial optimization. Given that a substantial part of the AAAI audience may be working in either of these two areas, we believe that the tutorial will attract considerable interest.

Additionally, the literature on the topic has grown rapidly and is now mature enough to have review articles. As such, we hope to provide newcomers with an entry point that will lead to further advances.

Target Audience

The tutorial targets both junior and senior researchers in two prominent areas of interest to the AAAI community:

  • Machine learning researchers looking for a challenging application domain, namely combinatorial optimization;

  • Optimization practitioners and researchers who may benefit from learning about recent advances in ML methods for improving combinatorial optimization algorithms.

Basic prerequisites of the tutorial include:

  • Combinatorial optimization: a basic understanding of optimization modeling, algorithm design, and computational complexity.

  • Machine learning: basic knowledge of paradigms such as supervised and reinforcement learning, and of common techniques such as neural networks.

Recording

Detailed Outline + slides

Part 1 by Elias B. Khalil: [slides]

  • Introduction to combinatorial optimization & Tutorial overview.

  • Modeling decision-making problems with Mixed Integer Programming (MIP) (a small modeling sketch follows this list);

  • Complexity and solution approaches (exact and heuristic);

  • Real-world applications;

  • Data-driven algorithm design.
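
To make the modeling step concrete, here is a minimal sketch of a 0-1 knapsack MIP written with PySCIPOpt, the Python interface to the SCIP solver used later in the tutorial. It is an illustrative example, not part of the tutorial materials, and assumes PySCIPOpt is installed.

    from pyscipopt import Model, quicksum

    # Toy 0-1 knapsack: maximize value subject to a capacity constraint.
    values = [8, 11, 6, 4]
    weights = [5, 7, 4, 3]
    capacity = 14

    model = Model("knapsack")
    x = [model.addVar(vtype="B", name=f"x{i}") for i in range(len(values))]
    model.setObjective(quicksum(values[i] * x[i] for i in range(len(values))), sense="maximize")
    model.addCons(quicksum(weights[i] * x[i] for i in range(len(values))) <= capacity)

    model.optimize()
    print("optimal value:", model.getObjVal())
    print("selected items:", [i for i in range(len(values)) if model.getVal(x[i]) > 0.5])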

Part 2 by Elias B. Khalil: [slides]

  • The pure ML approach: predicting feasible solutions.

  • Reinforcement learning for combinatorial optimization;

  • Neural network architectures for representing graph problems (a minimal sketch follows this list);

  • Limitations: lack of guarantees, scalability challenges.
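
As a glimpse of the kind of architecture discussed here, below is a minimal sketch of one round of neighborhood aggregation over a graph, written in plain PyTorch. The layer names and dimensions are illustrative assumptions, not the exact models presented in the slides.

    import torch
    import torch.nn as nn

    class SimpleMessagePassing(nn.Module):
        """One round of neighborhood aggregation over a graph."""
        def __init__(self, dim):
            super().__init__()
            self.w_self = nn.Linear(dim, dim)
            self.w_neigh = nn.Linear(dim, dim)

        def forward(self, h, adj):
            # h: (n_nodes, dim) node embeddings; adj: (n_nodes, n_nodes) adjacency matrix
            return torch.relu(self.w_self(h) + self.w_neigh(adj @ h))

    # Toy usage on a 4-node cycle graph; the refined embeddings could feed a
    # policy/value head in an RL approach, or a decoder predicting a solution.
    adj = torch.tensor([[0., 1., 0., 1.],
                        [1., 0., 1., 0.],
                        [0., 1., 0., 1.],
                        [1., 0., 1., 0.]])
    h = torch.randn(4, 16)
    layer = SimpleMessagePassing(16)
    h = layer(h, adj)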

Part 3 by Didier Chételat & Maxime Gasse: [slides]

  • The hybrid approach: improving exact solvers with ML.

  • The branch-and-bound framework for mixed-integer linear programs (MIP);

  • Standard approaches to solver engineering;

  • Learning solver search policies: a Markov decision process (MDP) perspective;

  • Overview of tasks of interest;

  • Open challenges for ML/RL.

Part 4 by Giulia Zarpellon & Laurent Charlin: [slides 1, 2]

  • Machine learning for MIP solving: challenges & literature.

  • Hands-on ML-for-MIP with a focus on the Branching problem;

  • Representations & Features (a small sketch follows this list);

  • Generalization notions;

  • Data & Metrics.
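
For the representations item above, the sketch below builds the bipartite constraint-variable view of a MIP that underlies several learned branching models, using NumPy only. The specific features attached to each node are illustrative assumptions; richer feature sets are discussed in the slides.

    import numpy as np

    # MIP data for constraints A x <= b with objective coefficients c.
    A = np.array([[3., 4., 2.],
                  [1., 3., 3.]])
    b = np.array([6., 5.])
    c = np.array([8., 11., 6.])

    # Bipartite graph: constraint nodes on one side, variable nodes on the other,
    # with edges carrying the nonzero coefficients of A.
    cons_features = b.reshape(-1, 1)          # e.g., right-hand sides
    var_features = c.reshape(-1, 1)           # e.g., objective coefficients
    rows, cols = np.nonzero(A)
    edge_index = np.stack([rows, cols])       # shape (2, number of nonzeros)
    edge_features = A[rows, cols].reshape(-1, 1)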

Part 5 by Antoine Prouvost: [slides]

  • Ecole: A Python framework for learning in exact MIP solvers.

  • A streamlined interface for doing ML in the open-source MIP solver SCIP, based on OpenAI Gym;

  • Example: "learning to branch" using Ecole (a minimal episode loop follows this list);

  • Easily extending predefined environments for your own research;

  • Performance evaluation and analysis.
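
To give a flavor of the Gym-style interface, here is a minimal episode loop with Ecole's branching environment. It follows the patterns in the Ecole documentation, but exact class and argument names may vary across versions, so treat it as a sketch rather than a reference.

    import ecole

    # Generate a random set-cover instance and branch until the solve finishes.
    instances = ecole.instance.SetCoverGenerator()
    env = ecole.environment.Branching()

    instance = next(instances)
    observation, action_set, reward_offset, done, info = env.reset(instance)
    while not done:
        action = action_set[0]  # placeholder policy: pick the first branching candidate
        observation, action_set, reward, done, info = env.step(action)

In practice, the placeholder policy would be replaced by a learned model that maps the observation to a choice from the current action set.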

Part 6 by Bistra Dilkina: [slides]

  • Decision-focused Learning.

  • Integrating LP/MIP combinatorial downstream tasks end-to-end in learning (a small sketch follows this list);

  • Integrating graph optimization tasks end-to-end in learning.
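
As one concrete way to realize the first item (not necessarily the formulation used in the slides), the sketch below differentiates through a small regularized LP with cvxpylayers, so that predicted objective coefficients can be trained on the downstream decision quality. The toy problem, the quadratic regularization weight, and the loss are illustrative assumptions.

    import cvxpy as cp
    import torch
    from cvxpylayers.torch import CvxpyLayer

    # Toy downstream task: pick a fractional selection x in [0, 1]^n with sum(x) <= 1,
    # maximizing predicted utility; a small quadratic term makes the argmax differentiable.
    n = 5
    x = cp.Variable(n)
    c = cp.Parameter(n)
    objective = cp.Maximize(c @ x - 0.1 * cp.sum_squares(x))
    problem = cp.Problem(objective, [cp.sum(x) <= 1, x >= 0, x <= 1])
    opt_layer = CvxpyLayer(problem, parameters=[c], variables=[x])

    c_true = torch.rand(n)
    c_hat = torch.rand(n, requires_grad=True)   # stand-in for a predictive model's output
    x_star, = opt_layer(c_hat)                  # decision induced by the prediction
    decision_loss = -c_true @ x_star            # negative realized utility of that decision
    decision_loss.backward()                    # gradients flow back into the prediction
    print(c_hat.grad)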

Part 7 by Andrea Lodi: [slides]

  • Concluding remarks and new frontiers.

  • Business applications;

  • Recap of various contributions in this area;

  • Evaluation and challenges going forward.


Presenters