Neural Architecture Search and Beyond for Representation Learning

[Date] | Seattle, Washington, in conjunction with CVPR 2020

Representation learning is a central pillar of computer vision and AI. Deep neural networks for representation learning have achieved considerable success in recent years, but this success often relies on human experts who manually design network architectures, set their hyperparameters, and develop new learning algorithms. As the complexity of many vision tasks often puts them beyond the reach of non-experts, the rapid growth of computer vision applications has created a demand for progressive automation of the neural network design process, aiming to make effective methods available to everyone. Neural Architecture Search (NAS) has been successfully used to automate the design of deep neural network architectures, achieving results that outperform hand-designed models on many computer vision tasks. While these recent works are opening up new paths forward, our understanding of why these specific architectures work well, how similar the architectures derived from different search strategies are, how to design the search space, how to search that space efficiently, and how to fairly evaluate different auto-designed architectures remains far from complete. The goal of this workshop is to bring together emerging research in the areas of automatic architecture search, optimization, hyperparameter optimization, data augmentation, representation learning, and computer vision to discuss open challenges and the opportunities ahead.

We invite submissions on any aspect of NAS and beyond for Representation Learning in Computer Vision. This includes, but is not limited to:

  • Theoretical frameworks and novel objective functions for representation learning
  • Novel network architectures and training protocols
  • Adaptive multi-task and transfer learning
  • Multi-objective optimization and parameter estimation methods
  • Reproducibility in neural architecture search
  • Resource-constrained architecture search
  • Automatic data augmentation and hyperparameter optimization
  • Unsupervised learning, domain transfer and lifelong learning
  • Computer vision datasets and benchmarks for neural architecture search

Please contact Rameswar Panda and/or Tianfu Wu if you have any questions.