Good Benchmarking Practices for Evolutionary Computation
Saturday, September 5, 16:00-17:30 Leiden time
hybrid format: online and onsite (in Leiden, NL)
open to all: no registration required for online participation
Link available on request, please get in contact: firstname.lastname@example.org
Benchmarking@PPSN'20: a platform to come together and to discuss recent progress and challenges in the area of benchmarking iterative optimization heuristics.
Link for online participation: available on demand via email@example.com
Open Access: registration at PPSN is not required for online participation in the workshop
(Note, however, that this applies only to the workshops; registration is required for other PPSN activities.)
The link to the workshop will be posted here before the workshop. You can also follow us on Twitter to stay informed: twitter.com/benchmark_net
Introduction (Pascal Kerschke, Slides)
Late to the Party: Reproducible Research in Evolutionary Computation (Manuel López-Ibáñez, Slides)
Empirical reproducibility is one of the foundations of the scientific method. Computer science, being in part an empirical science, has seen increasing efforts to achieve greater reproducibility. Although there is a growing consensus that reproducibility is also a concern in EC, our field lags behind other subfields of computer science in working towards that goal, due to various cultural and technical obstacles. In this talk, we introduce various concepts of reproducibility and how they map to the context of Evolutionary Computation (EC). We discuss obstacles to reproducibility as well as possible guidelines and solutions.
Performance Measurement (Jakob Bossek, Slides)
Most classical performance measures, e.g., Expected Running Time (ERT) or Penalized Average Runtime (PAR), aggregate multiple atomic performance measures into a single value: the probability of success (where success is binary, i.e., a certain target quality is reached or not) and the runtime until success. This talk aims to start a discussion on multi-objective performance measurement and on alternatives such as the anytime behavior of randomized search heuristics.
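To make the two aggregation schemes mentioned above concrete, here is a minimal sketch (not part of the talk; function names and the run format are our own illustration) of how ERT and PAR-k are commonly computed from a set of independent runs, each reporting its cost (evaluations or runtime) and whether it reached the target:

```python
def expected_running_time(runs):
    """ERT: total cost spent over all runs divided by the number of
    successful runs; infinite if no run reached the target."""
    total_cost = sum(cost for cost, _ in runs)
    successes = sum(1 for _, ok in runs if ok)
    return total_cost / successes if successes else float("inf")

def penalized_average_runtime(runs, cutoff, k=10):
    """PAR-k: mean runtime where each unsuccessful run is counted
    as k times the cutoff budget (PAR10 for k=10)."""
    penalized = [cost if ok else k * cutoff for cost, ok in runs]
    return sum(penalized) / len(penalized)

# Example: three runs with a budget of 1000 evaluations each;
# two reach the target, one does not.
runs = [(500, True), (1000, False), (300, True)]
print(expected_running_time(runs))            # (500+1000+300)/2 = 900.0
print(penalized_average_runtime(runs, 1000))  # (500+10000+300)/3 = 3600.0
```

Both measures collapse success probability and runtime into one number, which is exactly the aggregation the talk proposes to reconsider.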
Assessing and improving the state of the art in solving challenging problems (Holger H. Hoos, Slides)
For essentially all widely studied, computationally challenging problems, the state of the art is not defined by a single algorithm with fixed parameter settings. Instead, different types of problem instances are best solved using different parameter settings or entirely different algorithms. This raises the questions of how to assess the state of the art in solving such problems, how to fairly assess individual algorithms, and how to incentivise those working on algorithms for these kinds of problems to most efficiently improve the true state of the art. My presentation will give an overview of how to approach and tackle these issues.
Ongoing benchmarking initiatives and how to get involved (Carola Doerr, Slides)
We will discuss several initiatives that aim at improving benchmarking practices and will explain how to get involved. See https://docs.google.com/document/d/1wFm5Ol3bGwQVv54QjAxdohWeUSJjuw50E9Joy515pMs/edit?usp=sharing for an overview and relevant links.
The 16th International Conference on Parallel Problem Solving from Nature (PPSN XVI), organized as a hybrid (online and onsite) conference.
The onsite event will be held in Leiden, The Netherlands, September 5-9, 2020.
For colleagues affiliated with an institution in the following countries, ITC conference grants are available through COST action CA15140: Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO): https://imappnio.dcs.aber.ac.uk/stsms
The grants can cover full travel cost to attend PPSN or the registration fees for online participation.
Eligible countries: Albania, Bosnia-Herzegovina, Bulgaria, Cyprus, Czech Republic, Estonia, Croatia, Hungary, Lithuania, Latvia, Luxembourg, Malta, Moldova, Montenegro, Poland, Portugal, Romania, Slovenia, Slovakia, Republic of North Macedonia, Republic of Serbia and Turkey.
If you have any questions, please contact the main organizers.
Full List of Organizers
Thomas Bäck (Leiden University, The Netherlands)
Thomas Bartz-Beielstein (TH Cologne, Germany)
Jakob Bossek (The University of Adelaide, Adelaide, Australia)
Bilel Derbel (University of Lille, Lille, France)
Carola Doerr (CNRS researcher at Sorbonne University, Paris, France)
Tome Eftimov (Jožef Stefan Institute, Ljubljana, Slovenia)
Pascal Kerschke (University of Münster, Germany)
William La Cava (University of Pennsylvania, USA)
Arnaud Liefooghe (University of Lille, France)
Manuel López-Ibáñez (University of Manchester, UK)
Katherine Malan (University of South Africa)
Boris Naujoks (TH Cologne, Germany)
Pietro S. Oliveto (University of Sheffield, UK)
Patryk Orzechowski (University of Pennsylvania, USA)
Mike Preuss (Leiden University, The Netherlands)
Jérémy Rapin (Facebook AI Research, Paris, France)
Ofer M. Shir (Tel-Hai College and Migal Institute, Israel)
Olivier Teytaud (Facebook AI Research, Paris, France)
Heike Trautmann (University of Münster, Germany)
Ryan J. Urbanowicz (University of Pennsylvania, USA)
Vanessa Volz (modl.ai, Copenhagen, Denmark)
Markus Wagner (The University of Adelaide, Australia)
Hao Wang (LIACS, Leiden University, The Netherlands)
Thomas Weise (Institute of Applied Optimization, Hefei University, Hefei, China)
Borys Wróbel (Adam Mickiewicz University, Poland)
Aleš Zamuda (University of Maribor, Slovenia)