This website maintains resources for the competition on Machine Learning for Evolutionary Computation for solving Vehicle Routing Problems (VRPs). The website has been updated for the 2026 round of the competition.
Please join the Discord for any questions and discussions.
This competition aims to bring together the latest advances in machine learning-assisted evolutionary algorithms for solving vehicle routing problems (VRPs). Research in this area has collected a large amount of data on designing evolutionary algorithms, capturing rich knowledge in evolutionary computation. However, this data is often discarded or left uninvestigated in the literature. It includes solutions with different features that can inform or drive the evolution/optimisation, data on evolutionary algorithms with different settings and different operators/heuristics, and data on the search space or fitness evaluation. This provides an excellent new problem domain for the machine learning community to enhance evolutionary computation.
Variants of VRP provide an ideal testbed for comparing the performance of machine learning-assisted evolutionary computation. Fostering, reusing, and benchmarking this rich knowledge when building ML4VRP remains a challenge for researchers across disciplines; however, doing so is highly rewarding for furthering advances in human-designed evolutionary computation.
In this competition, there are two tracks: CVRP (the most basic model) and CVRPTW (VRP with capacity and time window constraints). Participants must develop machine learning model(s) that design and enhance evolutionary computation algorithms or meta-heuristics for solving VRPs.
Participants must submit descriptions of the developed algorithms and the solutions produced for the corresponding CVRP/CVRPTW instances. Submissions will be evaluated, using the provided evaluator, on randomly selected instances from the benchmark CVRP/CVRPTW sets. The most widely adopted evaluation function, i.e. minimising the number of vehicles and the total travel distance, is used to determine the best machine learning-assisted evolutionary algorithms for solving VRPs. The algorithms that produce the best average fitness will receive the highest score.
Building on the success of prior ML4VRP competitions and recognising the rapid advances in machine learning, including graph neural networks, reinforcement learning, and large language models (LLMs), the 2026 edition of the competition welcomes approaches that leverage these cutting-edge techniques for evolutionary algorithm design.
Participants are encouraged to explore a wide range of methods, from neural combinatorial optimisation and learned heuristics to novel LLM-based approaches, such as automated code generation and prompt-guided search, to advance the state of the art in solving vehicle routing problems.
Dataset of the VRP instances on the GitHub page for testing the algorithms (GitHub link)
Solution evaluator with instructions to evaluate the solutions (GitHub link)
The competition will adopt the convention already used in the recent competition [6], i.e., balancing the dual objectives of minimising the number of vehicles (NV) and the total travel distance (TD). The objective function is defined below, where c is set to 1000 empirically [7]:
Objective function = c × NV + TD
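For illustration, the objective can be computed with the minimal Python sketch below, assuming a solution is given as a list of routes (lists of customer ids) and Euclidean distances; all names here are illustrative assumptions, and the rounding conventions of the official evaluator on GitHub take precedence.

import math

C = 1000  # empirical weight on the number of vehicles [7]

def euclidean(a, b):
    # Euclidean distance between two (x, y) coordinates
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_distance(route, coords, depot):
    # Distance of the closed tour depot -> customers -> depot
    stops = [depot] + route + [depot]
    return sum(euclidean(coords[u], coords[v]) for u, v in zip(stops, stops[1:]))

def objective(routes, coords, depot):
    # c * NV + TD for a solution given as a list of routes
    nv = len(routes)
    td = sum(route_distance(r, coords, depot) for r in routes)
    return C * nv + td

For example, a solution using 5 vehicles with a total travel distance of 827.3 scores 1000 × 5 + 827.3 = 5827.3.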
The problem instances provided in the competition are taken from widely used benchmark datasets, available to download from the GitHub repository. The provided problem instances cover different instance types and sizes. All the VRP instances can also be found in CVRPLIB.
CVRP
The X dataset [4] is one of the most widely studied CVRP benchmark datasets (listed under Uchoa et al. (2014) in CVRPLIB). This dataset covers different instance features, such as depot positioning, customer positioning and demand distribution, allowing a comprehensive assessment of algorithm performance.
The problem instances provided in the competition are the instances in the X dataset with customers ranging from 100 to 400, covering different instance types. The competition will evaluate the submitted solution results using a subset of the provided instances (unknown to the participants before the results are presented).
CVRPTW
The Solomon dataset [5] and the Homberger and Gehring dataset [6] are widely studied CVRPTW benchmark datasets. Both consist of six types of instances, i.e., C1, C2, R1, R2, RC1 and RC2, which differ with respect to the customers' geographical locations, vehicle capacity, and the density and tightness of the time windows.
The problem instances provided in the competition are taken from two sources, i.e.,
Solomon [5] dataset of 100 customer problems,
Homberger and Gehring [6] datasets of 200 customer problems and 400 customer problems.
The provided problem instances are randomly selected from these three problem sizes, covering different instance types. The competition will evaluate the submitted solution results using a subset of the provided instances (unknown to the participants before the results are presented).
In this competition, we follow the convention used by the recent DIMACS VRP challenge [9] for instance format and solution format. Specifically, solutions should be represented in the CVRPLIB format.
We use the example solution below to explain the specifics of the instance and solution formats for the CVRP track and the CVRPTW track.
Route #1: 3 1 2
Route #2: 6 5 4
CVRP Track
CVRP instances are given in the TSPLIB95 format [10], i.e., locations are numbered from 1 to n. In particular, the depot is always node 1, and customers are numbered from node 2 to node n.
It is worth noting that, for historical reasons, the CVRPLIB solution format uses a convention slightly different from TSPLIB95, i.e., customers are numbered from 1 to n-1. Therefore, the given solution corresponds to routes 1 -> 4 -> 2 -> 3 -> 1 and 1 -> 7 -> 6 -> 5 -> 1 in TSPLIB95 numbering.
Further explanations about the solution format can be found in the DIMACS CVRP challenge documentation.
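As a concrete illustration of this numbering shift, the short Python sketch below parses routes in the CVRPLIB solution format and converts customer indices to TSPLIB95 numbering; the function names and file handling are assumptions for illustration only.

def read_solution(path):
    # Parse lines such as 'Route #1: 3 1 2' into lists of customer ids
    routes = []
    with open(path) as f:
        for line in f:
            if line.startswith("Route"):
                routes.append([int(tok) for tok in line.split(":")[1].split()])
    return routes

def to_tsplib_numbering(routes):
    # CVRPLIB solutions number customers 1..n-1; TSPLIB95 numbers them 2..n
    return [[c + 1 for c in route] for route in routes]

Applied to the example above, 'Route #1: 3 1 2' becomes the TSPLIB95 route 1 -> 4 -> 2 -> 3 -> 1.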
CVRPTW Track
CVRPTW instances are given in the widely accepted standard format for this specific variant. In CVRPTW instances, nodes are numbered from 0 to n. Node 0 is the depot, and customers are from node 1 to node n.
The example solution corresponds to routes 0 -> 3 -> 1 -> 2 -> 0 and 0 -> 6 -> 5 -> 4 -> 0.
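For illustration only, the sketch below checks a single CVRPTW route against the time windows under the usual Solomon conventions (travel time equals Euclidean distance, and a vehicle arriving early waits until the window opens); the data layout is an assumption, and the provided solution evaluator remains the authoritative check.

import math

def tw_feasible(route, nodes, depot=0):
    # nodes[i] = (x, y, demand, ready, due, service); route lists customers only
    time, prev = 0.0, depot
    for node in route + [depot]:
        time += math.hypot(nodes[node][0] - nodes[prev][0],
                           nodes[node][1] - nodes[prev][1])  # travel time
        time = max(time, nodes[node][3])  # wait if arriving before the ready time
        if time > nodes[node][4]:         # service must start by the due time
            return False
        time += nodes[node][5]            # service time
        prev = node
    return True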
Participants in the ML4VRP Competition should submit the following by 13 June 2026 to Rong.Qu@nottingham.ac.uk:
A short description of 1) the machine learning (e.g. supervised or unsupervised learning, reinforcement learning, deep learning, etc.) that designs, assists and enhances the evolutionary algorithms; and 2) the resulting algorithms (e.g. meta-heuristics, evolutionary algorithms, etc.) supported by the machine learning for solving the CVRP/CVRPTW.
This competition does not require the source code of the machine learning-assisted algorithm; however, open-source implementations are strongly encouraged, and high-quality, well-documented repositories may be highlighted as exemplary contributions.
The solutions in the required format for the corresponding CVRP/CVRPTW instances, to be verified by the solution evaluator provided in the competition's GitHub repository.
Please ensure:
When submitting to Rong.Qu@nottingham.ac.uk, the email subject line starts with [ML4VRP-Submission] and includes the name of your algorithm.
The algorithm description is submitted as a PDF and contains all relevant information, such as the name of your team, the algorithm, the team leader, the primary affiliation, and the track entered.
The solution output for each instance is in the required format, and the solution outputs for all provided VRP instances are packaged as a single compressed zip file.
Participants may also submit a two-page abstract by 21 April 2026, to be included in the GECCO proceedings if accepted. Please refer to the information for authors of "2-page Competition Entries" at https://gecco-2026.sigevo.org/Paper+Submission+Instructions.
Participants are also invited to submit a full paper to a special issue on ML4VRP in a journal. Details will be made available on the competition website as soon as the dates are agreed upon. We also encourage participants to attend GECCO 2026.
For each track, the competition will evaluate the submitted solutions for a subset of the provided VRP instances, which will remain unknown to the participants until the results are released. To determine the winner and compare the performance of the competing machine learning-assisted algorithms, we will adopt the scoring scheme used in the CHeSC competition [7], which is based on Formula 1.
Before 2010, Formula 1 used the following scoring scheme: in each race, the top eight drivers were awarded 10, 8, 6, 5, 4, 3, 2 and 1 points, respectively. The points earned by each driver across all races were added up, and the driver with the most points was declared the winner. This is adapted for this competition as follows.
Assume that there are m instances and n competing algorithms in total. For each instance, each algorithm is given an ordinal value x representing its rank relative to the others (1 ≤ x ≤ n). The top eight ranking algorithms for each instance will receive 10, 8, 6, 5, 4, 3, 2 and 1 point(s), respectively (as in Formula 1), while the remaining algorithms will receive no points for that instance.
The points will be added across the m instances for each algorithm. The winner will be the algorithm with the highest total points. Therefore, if there are, for example, five instances in the evaluation, the maximum possible score is 50 points.
To break ties where two or more algorithms achieve the same objective function value (to a precision of 3 decimal places) on a given instance, the points awarded to the corresponding ranking positions are added together and then distributed equally among the tied algorithms. This ensures that the total number of points awarded for each instance remains the same, and that no algorithm is unfairly advantaged or disadvantaged by a tie.
The winner of the competition is the algorithm with the most points. In the case where two or more algorithms are awarded the same total points, the algorithm with more wins (the number of instances on which it is ranked first among all competing algorithms) is ranked higher. If there is still a tie, the algorithm with the most second places wins, and so on.
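The per-instance scoring with tie sharing can be summarised in the short Python sketch below; the input layout is an assumption for illustration, with lower objective values ranked better.

from collections import defaultdict

POINTS = [10, 8, 6, 5, 4, 3, 2, 1]  # points for the top eight positions

def score(results):
    # results[instance][algorithm] = objective value (lower is better)
    totals = defaultdict(float)
    for by_algo in results.values():
        groups = defaultdict(list)  # tie groups keyed by value at 3 decimal places
        for algo, val in by_algo.items():
            groups[round(val, 3)].append(algo)
        rank = 0
        for val in sorted(groups):
            tied = groups[val]
            pts = POINTS[rank:rank + len(tied)]  # positions occupied by the tie
            for algo in tied:
                totals[algo] += sum(pts) / len(tied)  # share the points equally
            rank += len(tied)
    return totals

Under this scheme, for example, two algorithms tied for first place on an instance each receive (10 + 8) / 2 = 9 points.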
Two-page abstract submission: 21 April 2026
Description and solution submission: 13 June 2026
GECCO 2026 Conference: 13-17 July 2026
Rong Qu, University of Nottingham, UK
Weiyao Meng, University of Nottingham, UK
Isaac Triguero, University of Granada, Spain
Mustafa Misir, Duke Kunshan University, China
To be added: more bibliography relevant to the competition
N. Pillay, R. Qu, Hyper-heuristics: Theory and Applications, Springer, 2018. Book website.
N. Pillay, R. Qu (eds.), Automated Design of Machine Learning and Search Algorithms, Springer Natural Computing Series, 2021.
E. Uchoa, D. Pecin, A. Pessoa, M. Poggi, T. Vidal and A. Subramanian, "New benchmark instances for the capacitated vehicle routing problem," European Journal of Operational Research, 257(3):845–858, 2017.
M. M. Solomon, "Algorithms for the vehicle routing and scheduling problems with time window constraints," Operations Research, 35(2):254–265, 1987.
J. Homberger and H. Gehring, "Two evolutionary metaheuristics for the vehicle routing problem with time windows," INFOR: Information Systems and Operational Research, 37(3):297–318, 1999.
E. K. Burke, M. Gendreau, M. Hyde, G. Kendall, B. McCollum, G. Ochoa, A. J. Parkes and S. Petrovic, "The cross-domain heuristic search challenge – an international research competition," in Learning and Intelligent Optimization (LION 5), pp. 631–634, Springer, 2011.
J. D. Walker, G. Ochoa, M. Gendreau and E. K. Burke, "Vehicle routing and adaptive iterated local search within the HyFlex hyper-heuristic framework," in Learning and Intelligent Optimization (LION 6), pp. 265–276, Springer, 2012.
12th DIMACS Implementation Challenge: CVRP track. [Description], [Competition website].
G. Reinelt, "TSPLIB95," Interdisziplinäres Zentrum für Wissenschaftliches Rechnen (IWR), Heidelberg, 1995.