Javier Antoran, James Urquhart Allingham and José Miguel Hernández-Lobato
One-shot neural architecture search allows joint learning of weights and network architecture, reducing computational cost. We limit our search space to the depth of residual networks and formulate an analytically tractable variational objective that yields an unbiased approximate posterior over depths in one shot. We propose a heuristic for pruning our networks based on this distribution. We compare our method against manual search over network depths on the MNIST, Fashion-MNIST, and SVHN datasets. We find that pruned networks incur no loss in predictive performance, obtaining accuracies competitive with those of unpruned networks. Marginalising over depth allows us to obtain better-calibrated test-time uncertainty estimates than regular networks, in a single forward pass.
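As a rough illustration of the depth-marginalisation idea in this abstract (not the authors' implementation), the sketch below shows how a categorical distribution q(d) over the number of residual blocks can be combined with a shared output head to produce a depth-marginalised prediction in one forward pass. The class name, layer sizes, and the use of a simple residual MLP are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthMarginalNet(nn.Module):
    """Hypothetical residual MLP whose predictions are marginalised over depth.

    q_logits parameterise a categorical distribution q(d) over how many
    residual blocks to apply; the predictive distribution is
    p(y|x) = sum_d q(d) p(y|x, d), computed in a single forward pass by
    reading off the shared output head after every block.
    """
    def __init__(self, in_dim, hidden, n_classes, max_depth):
        super().__init__()
        self.stem = nn.Linear(in_dim, hidden)
        self.blocks = nn.ModuleList(
            [nn.Linear(hidden, hidden) for _ in range(max_depth)]
        )
        self.head = nn.Linear(hidden, n_classes)  # shared across depths
        self.q_logits = nn.Parameter(torch.zeros(max_depth + 1))  # q(d), d = 0..max_depth

    def forward(self, x):
        q = F.softmax(self.q_logits, dim=0)
        h = F.relu(self.stem(x))
        probs = q[0] * F.softmax(self.head(h), dim=-1)  # depth-0 prediction
        for d, block in enumerate(self.blocks, start=1):
            h = h + F.relu(block(h))  # residual update
            probs = probs + q[d] * F.softmax(self.head(h), dim=-1)
        return probs  # predictive distribution marginalised over depth

net = DepthMarginalNet(in_dim=784, hidden=64, n_classes=10, max_depth=5)
y = net(torch.randn(8, 784))
print(y.shape, y.sum(dim=-1))  # (8, 10); each row sums to ~1
```

A concentrated q(d) would then justify pruning blocks beyond the most probable depth, in the spirit of the heuristic the abstract describes.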
10:30 - 11:45 Virtual Poster Session (all papers)
James A. Preiss, Eugen Hotaj and Hanna Mazzawi
We explore the design decisions involved in reducing neural architecture search (NAS) to a reinforcement learning (RL) problem. We compare several reductions on the NAS-Bench-101 dataset, while holding the RL algorithm and search space constant. Based on our findings, we discuss how NAS differs from typical RL settings, and suggest guidelines for applying RL to NAS problems.
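To make the notion of a NAS-to-RL reduction concrete, here is a minimal sketch of one common formulation: a REINFORCE policy with independent categorical choices per decision site and a moving-average baseline. The reward function is a stub standing in for a benchmark accuracy lookup (e.g. from NAS-Bench-101); the number of sites, operations, and hyperparameters are illustrative assumptions, not the specific reductions the paper compares.

```python
import torch

N_SITES, N_OPS = 5, 3  # 5 decision sites, 3 candidate operations each
logits = torch.zeros(N_SITES, N_OPS, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

def reward(arch):
    # Stub for querying an architecture's benchmark accuracy;
    # here it simply favours operation 2 at every site.
    return (arch == 2).float().mean()

baseline = 0.0
for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    arch = dist.sample()  # one architecture = one op index per site
    r = reward(arch)
    baseline = 0.9 * baseline + 0.1 * r.item()  # moving-average baseline
    # REINFORCE: push up log-probability of architectures that beat the baseline
    loss = -(r - baseline) * dist.log_prob(arch).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(logits.argmax(dim=1))  # should converge to op 2 at every site
```

Variations of this reduction (e.g. sequential decisions with intermediate states rather than a single bandit-like step) are exactly the kind of design choice the paper studies.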
14:15 - 15:00 Panel Discussion