Machine Learning Assisted Evolutionary Computation for Vehicle Routing Problems

This website maintains resources for the competition on Machine Learning for Evolutionary Computation for solving Vehicle Routing Problems (VRPs). The website has been updated for the 2024 round of the competition.

Competition Overview

This competition aims to bring together the latest developments in machine learning-assisted evolutionary computation for vehicle routing problems (VRPs). Current research in evolutionary computation produces a wealth of knowledge that is often discarded or left uninvestigated: features of the problems and solutions that can inform or drive the evolution/optimisation, settings, operators and heuristics of effective evolutionary algorithms, and findings and evaluations of the search/fitness landscape. All of these can be collected and processed as data, providing an excellent new problem domain in which the machine learning community can enhance evolutionary computation.

VRP variants of different difficulties provide an ideal testbed for comparing the performance of machine learning-assisted computational optimisation. Building, reusing, and interpreting this rich knowledge in ML4VRP remains a challenge for researchers across disciplines; it is, however, highly rewarding for further advancing human-designed evolutionary computation.

In this competition, there are two tracks: CVRP (the Capacitated VRP, the most basic model) and CVRPTW (the CVRP with Time Window constraints). Participants must develop machine learning model(s) that can design or enhance evolutionary algorithms or meta-heuristics for solving VRPs.

Participants must submit descriptions of the developed algorithms and the produced solutions for the corresponding CVRP/CVRPTW instances. Submissions will be evaluated with an evaluator on randomly selected instances from the benchmark CVRP/CVRPTW instances. The most widely adopted evaluation function, i.e. minimising the number of vehicles and the total travel distance, is used to determine the best machine learning-assisted evolutionary algorithms for solving VRPs. The algorithms that produce the best average fitness will receive the highest score.

Problem Data and Solution Evaluator

VRP Instances

The competition will adopt the convention already used in the recent competition [6], i.e., balancing the dual objectives of minimising the number of vehicles (NV) and minimising the total travel distance (TD). The objective function is defined below, where c is set to 1000 empirically [7]:

Objective Function = c × NV + TD
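The objective above can be sketched in a few lines of Python. This is a minimal illustration, not the official evaluator: the function name `objective`, the route/coordinate representation, and the Euclidean distance assumption are ours; the actual benchmark instances define their own distance conventions.

```python
import math

def objective(routes, coords, c=1000):
    """Compute c * NV + TD for a candidate solution.

    routes: list of routes, each a list of node indices that starts and
            ends at the depot (e.g. [0, 3, 1, 2, 0]).
    coords: dict mapping node index -> (x, y) coordinates.
    c:      weight on the number of vehicles (1000 per the competition).
    """
    nv = len(routes)  # number of vehicles = number of routes
    td = 0.0          # total travel distance
    for route in routes:
        for a, b in zip(route, route[1:]):
            (x1, y1), (x2, y2) = coords[a], coords[b]
            td += math.hypot(x2 - x1, y2 - y1)  # Euclidean leg length
    return c * nv + td
```

For example, a single route from a depot at (0, 0) out to a customer at (3, 4) and back travels distance 10, giving an objective of 1010.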

The problem instances provided in the competition are taken from widely used benchmark data sets, available for download from the GitHub repository. The provided instances cover different instance types and sizes. All the VRP instances can also be found in CVRPLIB.

CVRP

The X dataset [4] is one of the most widely studied CVRP benchmark data sets (listed under Uchoa et al. (2014) in CVRPLIB). This data set covers different instance features, such as depot positioning, customer positioning, and demand distribution, allowing a comprehensive assessment of algorithm performance.

The problem instances provided in the competition are the instances in the X dataset with 100 to 400 customers, covering different instance types. The competition will evaluate the submitted solution results using a subset of the provided instances (unknown to the participants before the results are presented).

CVRPTW

The Solomon dataset [5] and the Homberger and Gehring dataset [6] are widely studied CVRPTW benchmark data sets. Both data sets consist of six types of instances, i.e., C1, C2, R1, R2, RC1, RC2, which differ with respect to the customers' geographical locations, the vehicle capacity, and the density and tightness of the time windows.

The problem instances provided in the competition are taken from two sources, i.e.,

The provided problem instances are randomly selected from these problem instances of three sizes, covering different instance types. The competition will evaluate the submitted solution results using a subset of the provided instances (unknown to the participants before the results are presented).

Important: Instance and Solution Formats

In this competition, we follow the convention used by the recent DIMACS VRP challenge [9] for the instance and solution formats. Specifically, solutions should be represented in the CVRPLIB format.

We use the example solution below to explain the specifics of the instance and solution formats for the CVRP track and the CVRPTW track.

Route #1: 3 1 2

Route #2: 6 5 4

CVRP track

CVRP instances are given in the TSPLIB95 format [10], i.e., locations are numbered from 1 to n. In particular, the depot is always at location 1, and customers are numbered from node 2 to node n.

It is worth noting that, for historical reasons, the CVRPLIB solution format uses a convention slightly different from TSPLIB95: customers are numbered from 1 to n-1. Therefore, the example solution above corresponds to the routes 1 -> 4 -> 2 -> 3 -> 1 and 1 -> 7 -> 6 -> 5 -> 1 in TSPLIB95 numbering.
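The numbering conversion described above is a common source of off-by-one errors, so here is a minimal sketch of it in Python. The function name `cvrplib_to_tsplib` is ours; the mapping itself (shift each customer index by one and wrap the route with the depot, node 1) follows directly from the two conventions described in the text.

```python
def cvrplib_to_tsplib(route):
    """Convert one CVRPLIB solution route (customers numbered 1..n-1)
    to the corresponding TSPLIB95 tour (depot at node 1, customers 2..n)."""
    return [1] + [c + 1 for c in route] + [1]
```

Applying this to the example solution, `Route #1: 3 1 2` becomes `[1, 4, 2, 3, 1]` and `Route #2: 6 5 4` becomes `[1, 7, 6, 5, 1]`, matching the routes stated above.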

Further explanations about the solution format can be found in the DIMACS CVRP challenge documentation.

CVRPTW track

CVRPTW instances are given in the widely accepted standard format for this variant. In CVRPTW instances, nodes are numbered from 0 to n: node 0 is the depot, and customers are nodes 1 to n.

The example solution corresponds to the routes 0 -> 3 -> 1 -> 2 -> 0 and 0 -> 6 -> 5 -> 4 -> 0.
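For both tracks, submitted solutions are plain text with one `Route #k:` line per route, so reading them back is straightforward. The following is a minimal parsing sketch under that assumption; the function name `parse_solution` is ours and this is not the official evaluator's reader.

```python
import re

def parse_solution(text):
    """Parse a CVRPLIB-style solution into a list of routes.

    Each line of the form 'Route #1: 3 1 2' becomes [3, 1, 2]; the
    depot visits at the start and end of each route are implicit and
    therefore not listed in the file.
    """
    routes = []
    for line in text.splitlines():
        m = re.match(r"Route #(\d+):\s*(.*)", line.strip())
        if m:
            routes.append([int(tok) for tok in m.group(2).split()])
    return routes
```

Parsing the two-route example above yields `[[3, 1, 2], [6, 5, 4]]`, which can then be renumbered per track before evaluation.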

Submission Instructions

Participants in the ML4VRP Competition should submit the following by 13 June 2024 to Rong.Qu@nottingham.ac.uk.

This competition does not require the source code of the machine learning-assisted algorithm.

Please ensure:

Participants may also submit a two-page abstract by 8 April 2024, to be included in the GECCO proceedings if accepted. Please refer to the information for authors of "2-page Competition Abstracts" at https://gecco-2024.sigevo.org/Paper-Submission-Instructions.

Participants are also invited to submit a full paper to a special issue on ML4VRP in a journal. Details will be made available on the competition website as soon as the dates are agreed upon. We also encourage participants to attend GECCO 2024.

Scoring Scheme

For each track, the competition will evaluate the submitted solutions for a subset of the provided VRP instances, which will remain unknown to the participants until the results are released. To determine the winner and compare the performance of the competing machine learning-assisted algorithms, we will adopt the scoring scheme used in the CHeSC competition [7], which is based on Formula 1.

Before 2010, Formula 1 used the following scoring scheme: in each race, the top eight drivers were awarded 10, 8, 6, 5, 4, 3, 2, and 1 points, respectively. The points earned by each driver across all races are added up, and the driver with the most points is declared the winner. This scheme is adapted for this competition as follows.

Assume that there are a total of m instances and n competing algorithms. For each instance, an ordinal value x is given representing the rank of the algorithm compared to the others (1 ≤ x ≤ n). The top eight ranking algorithms for each instance will receive 10, 8, 6, 5, 4, 3, 2 and 1 point(s), respectively (as in the Formula 1), while the remaining algorithms will receive no points for that instance.

The points will be added across the m instances for each algorithm. The winner will be the algorithm with the highest total points. Therefore, if there are, for example, five instances in the evaluation, the maximum possible score is 50 points.

To break ties where two or more algorithms achieve the same objective function value (to a precision of 3 decimal places) on a given instance, the points of the corresponding ranking positions are added together and distributed equally among the tied algorithms. This ensures that the total number of points awarded for each instance remains the same, and no algorithm is unfairly advantaged or disadvantaged by a tie.

The winner of the competition is the algorithm with the most points. If two or more algorithms are awarded the same total points, the algorithm with more wins (the number of instances on which it ranked first) is ranked higher. If there is still a tie, the algorithm with the most second places is ranked higher, and so on.
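The per-instance scoring with tie-splitting described above can be sketched as follows. This is an illustrative reimplementation, not the official scorer; the function name `score_instance` and the input format (a dict of objective values per algorithm) are our assumptions.

```python
POINTS = [10, 8, 6, 5, 4, 3, 2, 1]  # pre-2010 Formula 1 points for ranks 1..8

def score_instance(values):
    """Award points for one instance.

    values: dict mapping algorithm name -> objective value (lower is better).
    Algorithms tied to 3 decimal places split the points of the ranking
    positions they jointly occupy; ranks beyond eighth score nothing.
    """
    # Group algorithms by objective value rounded to 3 decimal places.
    groups = {}
    for alg, v in values.items():
        groups.setdefault(round(v, 3), []).append(alg)

    scores = {alg: 0.0 for alg in values}
    pos = 0  # next ranking position to fill (0-based)
    for v in sorted(groups):
        tied = groups[v]
        # Sum the points of the positions this tie group occupies,
        # then distribute them equally among the tied algorithms.
        pool = sum(POINTS[p] for p in range(pos, pos + len(tied))
                   if p < len(POINTS))
        for alg in tied:
            scores[alg] += pool / len(tied)
        pos += len(tied)
    return scores
```

For example, if algorithms A and B tie for first on an instance and C is third, A and B each receive (10 + 8) / 2 = 9 points and C receives 6, so the instance still hands out 24 points in total. The final ranking simply sums these scores over all evaluated instances.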

Important Dates


Organisers

Roundup 2023

The focus of the 2023 round of the competition was on solving the VRP with Time Window constraints (VRPTW). Participants were required to develop machine learning model(s) that could design or enhance evolutionary algorithms or meta-heuristics for solving VRPTWs.

Participants submitted descriptions of the developed algorithms and the produced solutions for the corresponding VRPTW instances. Submissions were evaluated with an evaluator on randomly selected instances from the benchmark VRPTW instances. The algorithms that produced the best average fitness received the highest score. The resources of the 2023 round of the competition are available in the GitHub repository.

We are thrilled to announce the winners of the ML4VRP2023 competition. Thank you to everyone who participated and made this competition a success!

References