Benchmarking@CEC-2021 Tutorial

Benchmarking and Experimentation: Pitfalls and Best Practices

-- online --

June 28 - July 1, 2021 (the exact date and time will be posted once they are known)

by Thomas Bartz-Beielstein, Boris Naujoks, Mike Preuss

Experimentation is arguably the most important driver of the tremendous advances currently being made in computer science and artificial intelligence, and this is also true for evolutionary computation. Whereas in other sciences experimentation is well structured and interacts closely with theory, this interaction is much weaker in optimization and, more generally, in AI research. In a first wave of improvements, the authors helped to establish the currently applied experimental methodology around 15 years ago. Today, many of the questions around structured experimentation are being revived by the current AI hype, in which some of the same problems are being tackled again. However, new problems also need to be handled, namely those of replicability and reproducibility. At the same time, the number of available benchmarking environments and competitions has increased enormously.


In our tutorial, we bring together the questions and problems around benchmarking and experimentation and provide an overview of the current state of the art. Our intention is to provide practical guidelines and hints that help to avoid the many pitfalls that can occur when working experimentally, especially with benchmarks.


Summary

  • Hosting event: CEC 2021, Tutorials

  • Date: June 28 - July 1, 2021 (exact date and time will be posted once they are known)

  • Tutorial duration: planned for 100 min; can easily be extended if there is demand

  • Tutorial level: introductory


  • More information to be provided as soon as available.

  • Contact: Boris Naujoks



Supported by the Benchmarking Network.