Neural Architecture Search and Beyond for Representation Learning

June 19, 2020 | Seattle, Washington, in conjunction with CVPR 2020

When: June 19, 2020 | Where: Please join the Zoom meeting

Representation learning is the central pillar of computer vision and AI. Deep neural networks for representation learning have achieved considerable success in recent years, but this success often relies on human experts who manually design network architectures, set their hyperparameters, and develop new learning algorithms. On the one hand, because the complexity of many vision tasks is often beyond non-experts, the rapid growth of computer vision applications has created a demand for progressive automation of the neural network design process, with the aim of making effective methods available to everyone. On the other hand, toward the next wave of trustworthy AI, representation learning may entail going beyond DNNs, toward learning (or learning-to-learn) interpretable, universal, and parsimonious representations for pattern analysis and synthesis in both big- and small-data regimes.

  • Neural Architecture Search (NAS) has been successfully used to automate the design of deep neural network architectures, achieving results that outperform hand-designed models on many computer vision tasks. While these recent works are opening up new paths forward, our understanding of why these specific architectures work well, how similar the architectures derived from different search strategies are, how to design the search space, how to search it efficiently, and how to fairly evaluate different automatically designed architectures remains far from complete. One goal of this workshop is to bring together emerging research in automatic architecture search, optimization, hyperparameter optimization, data augmentation, representation learning, and computer vision to discuss the open challenges and opportunities ahead.
  • Although remarkable progress has been achieved, state-of-the-art DNNs are still plagued by several critical issues: they are domain- and platform-specific, act as black boxes, are vulnerable to adversarial attacks, suffer from catastrophic forgetting in continual learning, lack commonsense knowledge integration, and are annotation-hungry, to name a few. The other goal of this workshop is to foster interdisciplinary communication among researchers working on representation learning (e.g., in computer vision, machine learning, natural language processing, healthcare, and robot autonomy) to stimulate more attention from, and diverse discussions within, the broader community. Broader discussions will also benefit the next generation of computer vision and AI researchers and practitioners.

We invite submissions on any aspect of NAS and beyond for representation learning, in vision and beyond. Topics include, but are not limited to:

  • Theoretical frameworks and novel objective functions for representation learning
  • Novel network architectures and training protocols
  • Adaptive multi-task and transfer learning
  • Multi-objective optimization and parameter estimation methods
  • Reproducibility in neural architecture search
  • Resource-constrained architecture search
  • Automatic data augmentation and hyperparameter optimization
  • Unsupervised learning, domain transfer and lifelong learning
  • Computer vision datasets and benchmarks for neural architecture search

Please contact Rameswar Panda and/or Tianfu Wu if you have any questions.