First ICLR Workshop on Neural Architecture Search (NAS 2020)
Submission deadline extended to February 7 (the day after the ICML deadline). Please do not try to pack an 8-page ICML paper into 4 pages for this workshop: choose your material wisely, keep the main 4-page paper self-contained and clear, and use the appendix for additional material.
Neural Architecture Search (NAS) is the logical next step in automating the learning of representations. It follows on from the recent transition from manual feature engineering to automatically learned features (using a fixed neural architecture) by replacing manual architecture engineering with automated architecture design. NAS can be seen as a subfield of automated machine learning (AutoML) and overlaps substantially with hyperparameter optimization and meta-learning. NAS methods have already outperformed manually designed architectures on several tasks, such as image classification, object detection, and semantic segmentation. They have also found architectures that yield a better trade-off between resource consumption on target hardware and predictive performance.
The goal of this workshop is to bring together researchers from industry and academia who focus on NAS. NAS is an extremely hot topic of large commercial interest and, as such, has a bit of a history of closed source and competition. It is therefore our goal to build a strong, open, inclusive, and welcoming community of colleagues (and ultimately friends) behind this research topic, with collaborating researchers who share insights, code, data, benchmarks, training pipelines, etc., and together aim to advance the science behind NAS. We encourage all submissions to release code and to follow the best practices laid out in the NAS best practices checklist.
Invited speakers:
- Isabelle Guyon (Chalearn & University Paris Sud)
- Song Han (MIT)
- Quoc V. Le (Google Brain)
- Ameet Talwalkar (CMU & Determined AI)
Organizers: Frank Hutter, Aaron Klein, Liam Li, Jan Hendrik Metzen, Nikhil Naik, and Arber Zela