Zero-Cost NAS Competition @ AutoML-Conf 2022


In their paper Zero-Cost Proxies for Lightweight NAS, Abdelfattah et al. proposed a series of zero-cost proxies that predict a model's final performance using just a single minibatch of training data. These proxies match, and in some cases outperform, conventional proxies, despite using three orders of magnitude less computation. In this competition, we challenge participants to create their own zero-cost proxy, which will be used to rank models across different search spaces and tasks.

Go to the competition page.


Neural Architecture Search (Elsken et al., 2019) has attracted a lot of attention over the last couple of years. The quest to automate the discovery of optimal architectures, however, comes at a steep price: computation. Most NAS techniques require significant amounts of GPU-hours to perform the search. To work around this restriction, "zero-cost proxies" (Abdelfattah et al., 2021), which use negligible computational resources, have been proposed. These proxies often require only a single forward/backward pass on a minibatch of data, and can be used to guide the search for an efficient architecture, as done by TE-NAS (Chen et al., 2021). The proxies range from something as simple as the number of parameters or FLOPs to computing the Neural Tangent Kernel (NTK) of an architecture (Jacot et al., 2018). Given their compute and memory efficiency, zero-cost proxies, if reliable enough, can provide a quick estimate of how well a neural network architecture will perform.
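Concretely, a zero-cost proxy is just a function that maps a network plus one minibatch to a scalar score. The toy sketch below illustrates the grad-norm flavor of proxy (one of the families surveyed by Abdelfattah et al.) on a plain linear model in NumPy; the model, data, and function names are illustrative, not the paper's actual implementation.

```python
import numpy as np

def grad_norm_proxy(W, X, y):
    """Score a model (here: a linear map W) by the gradient norm of a
    squared-error loss on a single minibatch (X, y). This mirrors the
    'one forward/backward pass' idea behind grad-norm-style proxies;
    the gradient norm is used as a crude trainability signal."""
    preds = X @ W                           # forward pass
    grad = 2 * X.T @ (preds - y) / len(X)   # backward pass (MSE gradient)
    return float(np.linalg.norm(grad))

# Hypothetical minibatch of 32 examples with 8 features
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))
y = rng.normal(size=(32, 1))
W = rng.normal(size=(8, 1))

score = grad_norm_proxy(W, X, y)
```

In a real proxy, `W` would be replaced by a sampled architecture and the gradient would come from autograd, but the interface is the same: one minibatch in, one scalar score out.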


This is a code submission competition hosted on CodaLab. Participants are required to implement their zero-cost proxies using a lightweight version of NASLib (Ruchte et al., 2020), a library for Neural Architecture Search. NASLib provides users with a range of tabular and surrogate benchmarks, making it easy to sample a random architecture from a supported search space, instantiate it as a PyTorch model, and query its final performance instantly. Once a zero-cost proxy has been implemented, the framework allows users to evaluate its performance across several search spaces and tasks in a matter of minutes.

The challenge is as follows: given N models from a search space, such as NAS-Bench-301 (Siems et al., 2020), the participant's zero-cost proxy is used to score and rank the models for a given task, such as classification on the CIFAR-10 dataset. The Kendall-Tau rank correlation between the predicted and actual ranks of the models is the metric of interest. The final score of a submission is the average rank correlation across a set of NAS benchmarks (combinations of search spaces and datasets). To keep the spirit of "zero-cost" proxies in the user submissions, the scoring of models must consume only negligible computational resources. This is enforced by running the computations on CPUs instead of GPUs and setting a hard limit on the runtime of the program.
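To make the metric concrete, here is a minimal pure-Python Kendall-Tau (the tau-a variant, ignoring ties) applied to made-up proxy scores and ground-truth accuracies; in practice one would use a library routine such as scipy.stats.kendalltau, which also handles ties.

```python
from itertools import combinations

def kendall_tau(pred, true):
    """Kendall-Tau (tau-a): (#concordant - #discordant pairs) / #pairs.
    A pair (i, j) is concordant when both score lists order it the same
    way. Ties are ignored here; a production implementation handles them."""
    pairs = list(combinations(range(len(pred)), 2))
    concordant = discordant = 0
    for i, j in pairs:
        s = (pred[i] - pred[j]) * (true[i] - true[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / len(pairs)

# Made-up proxy scores and final accuracies for N = 5 models
pred_scores = [0.9, 0.1, 0.5, 0.7, 0.3]
true_accs   = [93.1, 88.4, 91.0, 90.2, 89.5]
tau = kendall_tau(pred_scores, true_accs)   # 9 of 10 pairs agree -> 0.8

# The submission's final score averages tau over all benchmarks
taus = [tau]                                # a single benchmark in this toy case
final_score = sum(taus) / len(taus)
```

A perfect ranking yields tau = 1, a fully reversed one tau = -1, and a random scorer hovers around 0, which is why the average tau across benchmarks is a natural leaderboard score.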

The CodaLab competition page is here.


The timeline is as follows:

  • Start of the competition: 11.04.2022

  • Team registration: possible up to a week before submission deadline

  • Submission deadline extended to: 08.07.2022 00:00 UTC

  • Results announcement: AutoML-Conf 2022


First place

Team Codex: Lichuan Xiang [1], Łukasz Dudziak [2], Hongkai Wen [1, 2]

[1] University of Warwick

[2] Samsung AI Center, Cambridge

Second place

Team Warwick: Youyang Sha [1], Hongkai Wen [1, 2]

[1] University of Warwick

[2] Samsung AI Center, Cambridge

Third place

Team djcint: Daniel Cummings (Intel Labs)


In addition to certificates for the top 3 teams, we will provide all participants with certificates of participation. Moreover, submissions that make significant scientific contributions to the zero-cost NAS field and to NASLib (Ruchte et al., 2020) will be part of a potential paper on the results of the competition, and/or a future journal submission on NASLib. The corresponding authors of these submissions will be invited to be co-authors of these papers.


Organizers

  • Arjun Krishnakumar (University of Freiburg)

  • Arber Zela (University of Freiburg)

  • Rhea Sukthanker (University of Freiburg)

  • Shakiba Moradian (University of Freiburg)

  • Debadeepta Dey (Microsoft Research)

  • Binxin Ru (University of Oxford)

  • Mahmoud Safari (University of Freiburg)

  • Frank Hutter (University of Freiburg & Bosch Center for Artificial Intelligence)