ECCV 2022 Tutorial on
New Frontiers in
Efficient Neural Architecture Search
Monday, October 24 (AM), 2022
Remote, Tel Aviv time
Overview
Neural Architecture Search (NAS) has become increasingly important in computer vision as a way to automate the design of neural network architectures. However, because the search space grows exponentially with network depth and the number of candidate operations, many classical NAS algorithms require hundreds of GPU days, which is impractical for most users. Recently, significant progress has been made in improving the efficiency of NAS, making it feasible to run even for regular users on standard GPU machines. This tutorial introduces and summarizes this recent progress, which enables applying NAS to a diverse set of tasks.
Schedule
Part I: Introduction and history (9:00 - 9:20, Cho-Jui Hsieh)
We will start with a brief introduction to NAS and go over its history.
Part II: Differentiable NAS (9:20 - 10:40, Cho-Jui Hsieh)
This part introduces the family of differentiable NAS algorithms, which speed
up NAS through continuous relaxation and weight sharing. Starting with the seminal
work of DARTS, we will cover recent improvements in supernet training,
architecture selection, and scalability.
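To make the continuous-relaxation idea concrete, here is a minimal sketch of a DARTS-style mixed operation. The candidate operations are toy 1-D stand-ins (real DARTS uses convolutions, pooling, skip connections, and a zero op), and the architecture parameters alpha would be learned by gradient descent alongside the network weights; both the op list and the numbers below are illustrative assumptions, not the tutorial's actual code.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Candidate operations on one edge of the cell (toy stand-ins for
# conv / pool / skip / zero in real DARTS search spaces).
ops = [
    lambda x: x,                    # skip connection
    lambda x: np.maximum(x, 0.0),   # ReLU, standing in for a conv block
    lambda x: np.zeros_like(x),     # "zero" op (edge removed)
]

def mixed_op(x, alpha):
    """DARTS continuous relaxation: the edge output is a
    softmax-weighted sum of all candidate operations."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = np.array([-1.0, 2.0])
alpha = np.array([0.1, 2.0, -1.0])  # architecture parameters (learned in practice)
y = mixed_op(x, alpha)

# After search, the standard discretization keeps the op with the
# largest architecture weight on each edge.
best = int(np.argmax(alpha))
```

Because every candidate op contributes to the output, the whole supernet is differentiable in alpha, so architecture search reduces to a (bi-level) gradient-based optimization instead of training each discrete architecture from scratch.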
Part III: Predictor-based NAS with Graph Neural Networks (10:40 - 11:10, Ruochen Wang)
This part introduces the family of predictor-based NAS algorithms, which
iteratively fine-tune a surrogate performance predictor and use it to sample
unexplored configurations. We will cover recent progress including
Graph Neural Network (GNN) based predictors, improved Bayesian sampling,
and efficiency improvements via successive halving.
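The predictor-based loop described above can be sketched in a few lines. This is a deliberately simplified stand-in: architectures are encoded as random feature vectors (a GNN embedding would be used for real architecture graphs), the surrogate is a least-squares linear model rather than a trained GNN, and `true_accuracy` is a hypothetical black box standing in for an expensive training-and-evaluation run.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_accuracy(arch):
    # Hypothetical black box: in practice this means training the
    # architecture and measuring validation accuracy (expensive).
    return float(arch @ np.array([0.5, -0.2, 0.3]) + 0.01 * rng.standard_normal())

# Candidate pool of architectures encoded as feature vectors.
pool = rng.random((200, 3))

# Warm-start the predictor with a few randomly evaluated architectures.
evaluated = [int(i) for i in rng.choice(200, size=10, replace=False)]
scores = {i: true_accuracy(pool[i]) for i in evaluated}

for _ in range(5):  # predictor-based search iterations
    X = pool[evaluated]
    y = np.array([scores[i] for i in evaluated])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # (re-)fit the surrogate predictor
    preds = pool @ w                           # predict accuracy for every candidate
    preds[evaluated] = -np.inf                 # mask already-evaluated architectures
    nxt = int(np.argmax(preds))                # query the most promising one
    evaluated.append(nxt)
    scores[nxt] = true_accuracy(pool[nxt])     # only this one pays the full cost

best = max(scores, key=scores.get)
```

The point of the pattern is that only the architectures selected by the predictor incur the expensive evaluation; Bayesian sampling variants replace the greedy `argmax` with an acquisition function that also accounts for predictor uncertainty.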
Part IV: Zero-shot NAS (11:10 - 11:30, Ruochen Wang)
This part introduces a recently proposed class of NAS algorithms that
estimate the performance of architectures from zero-shot properties,
such as graph-based features of the network, the expressiveness
of the network, and the condition number of the Neural Tangent Kernel (NTK).
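As a toy illustration of the NTK-based proxy, the sketch below computes the condition number of an empirical NTK. To keep it self-contained it uses a linear model, for which the gradient of the output with respect to the parameters is just the input, so the NTK reduces to the input Gram matrix; for a real architecture the Jacobian rows would come from backpropagation through the untrained network. The batch size and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ntk_condition_number(jac):
    """Condition number of the empirical NTK, Theta = J @ J.T,
    where row i of `jac` is the gradient of the network output on
    sample i with respect to all parameters."""
    theta = jac @ jac.T
    eig = np.linalg.eigvalsh(theta)  # ascending eigenvalues
    return float(eig[-1] / eig[0])

# Toy case: for a linear model f(x) = w @ x, d f / d w = x, so the
# Jacobian rows are the inputs themselves and Theta is the Gram matrix.
X = rng.standard_normal((8, 16))   # batch of 8 samples, 16 "parameters"
kappa = ntk_condition_number(X)
```

Zero-shot NAS methods score every candidate architecture with such a proxy at initialization, without training any of them; a better-conditioned NTK is taken as a signal of better trainability when ranking candidates.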
Part V: Real use cases in various tasks (11:30 - 11:50, Ruochen Wang)
We will introduce several applications of NAS to real-world computer vision
problems, such as transformer- or CNN-based classification, detection, and
semantic clustering. We will also show how NAS has inspired the development
of other AutoML tasks, such as automated optimizer search.
Part VI: Conclusions and future directions (11:50 - 12:00, Ruochen Wang)
This part will summarize the current progress of NAS and point out important
future directions for researchers to work on.