2nd Workshop on Neural Architecture Search @ICLR2021 (NAS 2021)

May 7th 2021 (moved from May 8th by ICLR, since the entire conference moved one day earlier)

Collocated with the 9th International Conference on Learning Representations (ICLR 2021)

Like the main conference, this workshop will be fully virtual and live streamed from the ICLR virtual platform. For details, please see the Practical Information page and the FAQ.

Neural Architecture Search (NAS) is an exciting new field of study that takes representation learning to the next level by learning architectures in a data-driven way, which in turn enables efficient learning of representations. While representation learning removed the need for manual feature engineering, it shifted the manual effort to the selection of architectures. As a natural next step, NAS replaces this manual architecture selection, allowing true end-to-end learning of the architecture, the features, and the final classifier that uses the features expressed as instantiations of the architecture.

Since the first workshop on NAS at ICLR 2020, there have been many new developments in NAS. Firstly, there has been a large increase in the number of standardized tabular benchmarks, and more researchers are releasing source code, leading to more rigorous empirical NAS research and allowing research groups without access to industry-scale compute resources to run thorough experimental evaluations. Secondly, there are now several efforts toward standardized, modularized open-source libraries that allow both clean evaluations of different approaches without confounding factors and mixing and matching of components from different NAS methods. Finally, NAS is now being applied beyond its original narrow focus on object recognition, to fields such as semantic segmentation, speech recognition, and natural language processing.

The goal of this workshop is to build a strong, open, inclusive, and welcoming community of colleagues (and ultimately friends) around this research topic, with collaborating researchers who share insights, code, data, benchmarks, training pipelines, etc., and together aim to advance the science behind NAS. We encourage all submissions to release code and to follow the best practices laid out in the NAS best practices checklist.

Keynote Speakers

  • Jeff Clune (OpenAI)

  • Fabio Carlucci (Facebook / research work done at Huawei)

  • Margret Keuper (University of Mannheim)

  • Xuanyi Dong (University of Technology Sydney)


Note that due to the change of format to a virtual workshop, we will upload the video recordings of every invited talk (35 min) and contributed talk (15 min) to the Accepted Papers page before the actual workshop day. We plan to upload the videos by April 19th, but there may be delays due to the current situation. Additionally, this year we will have a short poster spotlight session before the long poster session, in which each presenter gives a high-level overview of their paper.


Aaron Klein, Arber Zela, Frank Hutter, Liam Li, Jan Hendrik Metzen, Jovita Lukasik