Benchmarking@CEC-2021 Workshop

Good Benchmarking Practices for Evolutionary Computation

Monday, June 28, 2021 16:15-18:15 Krakow/Berlin time
-- online event --

A platform to come together and to discuss recent progress and challenges in the area of benchmarking optimization heuristics.

This workshop continues the workshop series that we started in 2020 (BENCHMARK@GECCO-2020 with >75 participants and Benchmarking@PPSN2020 with >90 participants). The core theme is benchmarking evolutionary computation methods and related sampling-based optimization heuristics.

The workshop is a joint activity of the Benchmarking Network and the IEEE CIS Task Force on Benchmarking.

Schedule -- Overview

  • Welcome & Opening by the Workshop Organizers (5')

  • Invited Talk 1:
    Using Benchmarked EAs for Real-world Applications in Medicine - Expectation vs Reality
    Speaker: Peter A.N. Bosman, Centrum Wiskunde & Informatica (CWI) and Delft University of Technology

      • Abstract: In the development and analysis of Evolutionary Algorithms (EAs), it is very common to use a variety of well-known benchmark functions to gauge the performance of an EA and draw conclusions about its competence. Admirable efforts on the side of benchmark design aside, questions about the size of the simulation-to-reality gap always remain. In this talk I will share some of my own experiences as an EA researcher who has been active on both sides of the gap, predominantly for real-valued EAs on the one side and medical applications on the other side. Whereas for some problems the gap turned out to be almost non-existent, leading to uptake in a medical center in Amsterdam for clinical use, for other problems the gap turned out to be much bigger. However, this story has multiple faces. In particular, in EAs, most benchmark problems are of the black-box kind where, when solving these problems, nothing must be assumed to be known a priori. In practice, this is hardly ever the case, which begs the question: is it time to paint our black box benchmarks grey?

  • Invited Talk 2:
    Ontologies for open computer science
    Speaker: Sašo Džeroski, Jožef Stefan Institute, Ljubljana

  • Invited Talk 3:
    Metaphor-based metaheuristics: the good and the bad
    Speaker: Christian Camacho, Université Libre de Bruxelles (ULB)

      • Abstract: A metaheuristic is a high-level procedure designed to find, generate, or select a heuristic that may efficiently provide a good solution. In the history of metaheuristics, taking inspiration from natural phenomena has played an important role. In fact, metaphor-based metaheuristics, such as evolutionary algorithms and ant colony optimization, are paradigmatic examples of such an approach. Yet, in the last 10 to 20 years, we have been witnessing a trend that consists in taking inspiration from all kinds of natural (and sometimes supernatural) phenomena to propose hundreds of so-called “novel” metaphor-based algorithms. Although one could get the impression that this is a sign of a very active community full of new ideas, this trend is being increasingly recognized as unhealthy and detrimental to the field. Some of the aspects that characterize these “novel” algorithms are the use of useless metaphors, lack of novelty, and poor experimental validation and comparison. In this talk, I will discuss these aspects in detail and present some of the attempts that members of the metaheuristics community have made so far to stop this highly undesirable trend.

        Based on joint work with Thomas Stützle and Marco Dorigo.


Benchmarking plays a vital role in understanding the performance and search behavior of sampling-based optimization techniques such as evolutionary algorithms. Even though benchmarking is a highly researched topic within the evolutionary computation community, a number of open questions and challenges remain to be explored:

  1. most commonly used benchmarks are too small and cover only a part of the problem space,

  2. benchmarks lack the complexity of real-world problems, making it difficult to transfer knowledge learned on benchmarks into practice,

  3. proper statistical analysis techniques need to be developed that can easily be applied according to the nature of the data,

  4. the culture of benchmarking, reporting on experiments, and sharing resources to ensure reproducibility needs to be improved. This helps to avoid common pitfalls in benchmarking optimization techniques. As such, we need to establish new standards for benchmarking in evolutionary computation research so that we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved,

  5. user-friendly, openly accessible benchmarking software would help to address the above points.
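As one illustration of the statistical-analysis challenge above, the following minimal sketch compares the final fitness values of two optimizers over repeated independent runs with a nonparametric permutation test, which makes no normality assumption about the data. All names and run data here are hypothetical, not taken from any particular benchmark suite:

```python
import random
import statistics

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test on the absolute difference of means.

    Estimates the p-value for the null hypothesis that samples a and b
    come from the same distribution, by shuffling the pooled values and
    counting how often a random split is at least as extreme as observed.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    n_a = len(a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical best-fitness values from 10 independent runs of two optimizers
runs_alg1 = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.15, 0.14, 0.13]
runs_alg2 = [0.22, 0.25, 0.21, 0.24, 0.23, 0.26, 0.22, 0.25, 0.24, 0.23]
p = permutation_test(runs_alg1, runs_alg2)
```

A small p-value indicates that the observed difference between the two optimizers is unlikely to arise by chance; in a sound benchmarking study such a test would of course be combined with effect sizes and corrections for multiple comparisons.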

Workshop Aims

With the above challenges in mind, we contemplate improvements in the following directions:

  • Performance measures for comparing algorithm behavior

  • Novel statistical approaches for analyzing empirical data

  • Selection of meaningful benchmark problems

  • Landscape analysis

  • Data mining approaches for understanding algorithm behavior

  • Transfer learning from benchmark experiences to real-world problems

  • Benchmarking tools for executing experiments and analysis of experimental results

  • Suggestions for good benchmarking practices
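To make the first aim concrete, one widely used performance measure is the expected running time (ERT) popularized by the COCO/BBOB platform: the total number of function evaluations spent across all runs divided by the number of runs that reached the target. The sketch below is a simplified illustration with hypothetical run data, not an implementation from any specific tool:

```python
def expected_running_time(evals, successes):
    """Expected running time (ERT): total evaluations spent across all
    runs divided by the number of successful runs. Returns infinity
    when no run reached the target."""
    n_success = sum(successes)
    if n_success == 0:
        return float("inf")
    return sum(evals) / n_success

# Hypothetical data: evaluations used per run, and whether the run
# reached the target precision (unsuccessful runs spent the full budget)
evals = [1000, 1200, 5000, 900, 5000]
successes = [True, True, False, True, False]
ert = expected_running_time(evals, successes)  # (1000+1200+5000+900+5000)/3
```

Because ERT aggregates both speed and success rate into one number, it allows algorithms with different success probabilities to be compared on the same scale.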


Schedule -- Details
  • Introduction to the workshop

  • Discussion rounds, each initiated by a short presentation by an invited speaker

    • What are the goals of benchmarking and how should a sound benchmarking study be conducted?

    • How well does our community do in terms of sound benchmarking?

    • Can we benefit from OntoDM to share results of evolutionary computation experiments?

    • What can each one of us contribute to establish benchmarking best practices in evolutionary computation?

  • Presentation of the Benchmarking Network, the IEEE CIS Task Force on Benchmarking, and other ongoing initiatives

  • Breakout Sessions

  • Closing


Hosting Event

Our workshop will be part of the IEEE Congress on Evolutionary Computation (CEC 2021), which (as of now) is planned to take place as an onsite event in Kraków, Poland!

Related Events

At CEC 2021, there will be two Special Sessions whose focus is strongly related to the workshop's scope:

If you are interested in our workshop's scope, please consider submitting your work to those sessions.

Moreover, a similar benchmarking best practices workshop will be held at GECCO 2021, which takes place online from July 10 - 14, 2021:

This workshop is organized as part of the ImAppNIO COST Action 15140.