Machine Learning for Combinatorial Optimization
Tutorial at IJCAI 2020 - January 6-7, 2021
Organized by the Canada Excellence Research Chair in Data Science for Real-Time Decision-Making and the University of Toronto
Brief Overview
This tutorial will provide an overview of the recent impact of machine learning on combinatorial optimization, particularly within the Mixed Integer Programming (MIP) framework. Topics will include machine learning and reinforcement learning for predicting feasible solutions, improving exact solvers with ML, and Ecole, a software framework for learning inside the exact MIP solver SCIP.
Bengio, Lodi, Prouvost (2018): Different ways in which ML can intervene in a Combinatorial Optimization algorithm
Description
Combinatorial Optimization (CO) is a cornerstone of Computer Science, Artificial Intelligence (AI) and Operations Research. It has been widely successful in industrial applications ranging from airline crew planning to sports scheduling and kidney exchange. While CO used to underlie most AI research through the satisfiability problem (SAT), modern AI research has moved towards more probabilistic approaches, and the connection between the two fields has weakened. In the last five to ten years, however, there has been a strong revival of interest in improving combinatorial optimization using machine learning methods.
This tutorial aims to introduce the audience to this exciting, growing field. We believe that the audience will benefit greatly from the proposed tutorial, as it will lay out the landscape of this research space, discuss the merits of different ML techniques in the CO setting, and survey the CO tasks that have benefited from the use of ML. We will also introduce a new open-source library, Ecole, aimed at facilitating access to the field for newcomers. While the tutorial will mostly focus on Mixed Integer Programming as a concrete mathematical framework for CO, we will also touch on the relationship between MIP and other constraint reasoning frameworks such as Satisfiability (SAT) and Constraint Satisfaction (CSP), as most of the ideas presented carry over to those frameworks as well.
Why this tutorial?
The aim of this tutorial is to connect two central topics in AI: machine learning and combinatorial optimization. Given that a substantial part of the IJCAI audience works in one of these two areas, we believe that the tutorial will attract considerable interest.
Additionally, the literature on the topic has grown rapidly and is now mature enough to have review articles. As such, we hope to provide newcomers with an accessible entry point that leads to further advances.
Target Audience
The tutorial targets both junior and senior researchers in two prominent areas of interest to the IJCAI community:
Machine learning researchers looking for a challenging application domain, namely combinatorial optimization;
Optimization practitioners and researchers who may benefit from learning about recent advances in ML methods for improving combinatorial optimization algorithms.
Basic prerequisites for the tutorial include:
Combinatorial optimization: basic understanding of optimization modeling, algorithm design, and computational complexity.
Machine learning: basic knowledge of paradigms such as supervised and reinforcement learning, and of common techniques such as neural networks.
Detailed Outline
Introduction to combinatorial optimization.
Modeling decision-making problems with Mixed Integer Programming (MIP) (a generic formulation is sketched after this list);
Complexity and solution approaches (exact and heuristic);
Real-world applications: planning, scheduling and routing for transportation, healthcare, etc.
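To make the modeling item above concrete, a generic MIP can be written in standard textbook notation as follows; this formulation is illustrative and not taken from the tutorial material.

```latex
\min_{x} \; c^{\top} x
\quad \text{subject to} \quad
A x \le b, \qquad x \ge 0, \qquad x_j \in \mathbb{Z} \;\; \text{for all } j \in \mathcal{I}
```

Here c, A, and b are the objective and constraint data, and the set I indexes the variables required to take integer values. Dropping the integrality requirement yields the LP relaxation that the branch-and-bound methods discussed below solve repeatedly.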
Data-driven algorithm design.
The pure ML approach: predicting feasible solutions.
Reinforcement learning for combinatorial optimization;
Neural network architectures for representing graph problems;
Limitations: lack of guarantees.
The hybrid approach: improving exact solvers with ML.
The branch-and-bound framework for mixed integer linear programs;
Learning solver search policies: a Markov decision process (MDP) perspective (a toy code sketch follows this list);
Overview of tasks of interest: node selection, heuristic selection, cut selection, etc.;
Learning solver-based heuristics: diving and large neighborhood search;
Open challenges for ML.
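To give a flavor of the MDP perspective mentioned above, the following self-contained Python sketch runs a toy branch-and-bound loop in which a hand-written "most fractional" rule plays the role of the branching policy; this is precisely the decision a learned model can replace. The example problem and all helper names are illustrative, and SciPy's linprog is used only to solve the LP relaxations.

```python
# Toy branch and bound as a sequential decision process: at each node an LP
# relaxation is solved and a *policy* picks the variable to branch on.
# Illustrative only; requires SciPy (scipy.optimize.linprog).
import heapq
import math
from scipy.optimize import linprog

# Small knapsack-style MIP: max 8x0 + 11x1 + 6x2 + 4x3
# s.t. 5x0 + 7x1 + 4x2 + 3x3 <= 14, x binary. linprog minimizes, so negate c.
c = [-8.0, -11.0, -6.0, -4.0]
A_ub, b_ub = [[5, 7, 4, 3]], [14]

def most_fractional(x):
    """Hand-written branching policy: the decision point an ML model can learn."""
    frac = [abs(v - round(v)) for v in x]
    j = max(range(len(x)), key=lambda i: frac[i])
    return j if frac[j] > 1e-6 else None  # None means the LP solution is integral

best_x, best_obj = None, math.inf
nodes = [(-math.inf, 0, [(0, 1)] * len(c))]  # (parent bound, tiebreak id, var bounds)
next_id = 1
while nodes:
    bound, _, var_bounds = heapq.heappop(nodes)   # node selection (best first)
    if bound >= best_obj:
        continue                                  # pruned by bound
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=var_bounds, method="highs")
    if not res.success or res.fun >= best_obj:
        continue                                  # infeasible or pruned
    j = most_fractional(res.x)                    # the MDP "action": branching decision
    if j is None:
        best_x, best_obj = res.x, res.fun         # new incumbent found
        continue
    lo, hi = var_bounds[j]
    down, up = list(var_bounds), list(var_bounds)
    down[j] = (lo, math.floor(res.x[j]))          # child with x_j rounded down
    up[j] = (math.ceil(res.x[j]), hi)             # child with x_j rounded up
    for child in (down, up):
        heapq.heappush(nodes, (res.fun, next_id, child))
        next_id += 1

print("integer optimum:", [round(v) for v in best_x], "objective:", -best_obj)
```

Swapping most_fractional for a trained model, and choosing what state information it observes, is exactly the design space covered in this part of the tutorial.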
Machine learning for branching.
The branching problem;
Learning to branch with graph neural networks;
Hybrid models for learning to branch.
Ecole: a Python framework for learning in exact MIP solvers.
A streamlined interface for doing ML in the open-source MIP solver SCIP, inspired by OpenAI Gym;
Example: "learning to branch'' using Ecole;
Easily extending predefined environments for your own research;
Performance evaluation and analysis.
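To preview the hands-on part, here is a minimal usage sketch of an Ecole branching environment in the spirit of the library's public examples. Class names, constructor arguments, and the exact tuples returned by reset and step follow Ecole's documentation at the time of writing and may change between versions, so treat this as illustrative rather than as the definitive API; the trivial policy (always picking the first branching candidate) is a stand-in for a trained model.

```python
# Minimal "learning to branch" episode loop with Ecole (illustrative sketch;
# API details may differ between Ecole versions).
import ecole

env = ecole.environment.Branching(
    observation_function=ecole.observation.NodeBipartite(),  # bipartite graph features
)
instances = ecole.instance.SetCoverGenerator(n_rows=100, n_cols=200)  # synthetic MIPs

for _ in range(5):  # a few episodes
    observation, action_set, reward_offset, done, info = env.reset(next(instances))
    while not done:
        action = action_set[0]  # placeholder policy; replace with a learned model
        observation, action_set, reward, done, info = env.step(action)
```

In the tutorial's example, a graph neural network trained on such bipartite observations would take the place of this placeholder policy, and the reward and observation functions can be swapped out to define new environments.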
Concluding remarks and new frontiers.
Business applications;
Recap of various contributions in this area;
Evaluation and challenges going forward.