Dynamic Neural Networks Meet Computer Vision

June 19, 2022 (13:30-17:30 CDT) | New Orleans, Louisiana, in conjunction with CVPR 2022

Despite the recent success of large-scale neural networks, most existing deep methods rely on one-size-fits-all models, where the exact same fixed set of features is extracted for all inputs or configurations, regardless of their complexity. In contrast, humans dynamically allocate time and scrutiny in perception tasks - for example, a single glimpse is sufficient to recognize most objects and scenes, whereas more time and attention are required to clearly understand occluded or complicated ones. Dynamic Neural Networks, a new type of deep neural network, allow selective execution: given an input or environmental setting, only a subset of neurons is executed, instead of the same computation being performed as in traditional static networks.

Dynamic networks have significantly outperformed static neural networks in many computer vision tasks, such as image classification, object detection, semantic segmentation, and video classification. Beyond classical vision tasks, the application areas of dynamic neural networks are strategic not only for academic research, including visual reasoning and learning, but also for business, e.g., the acceleration and deployment of deep neural network models in resource-constrained environments. While these recent works are opening up new paths forward, our understanding remains far from complete on why these dynamic networks work well, what their accuracy-efficiency trade-offs are, how to learn dynamic activation functions, how to design hardware accelerators and architectures for training dynamic networks, and how to fairly compare dynamic networks with static networks in different applications.

The goal of this workshop is to bring together emerging research in the areas of dynamic deep neural networks, optimization, predictive control, dynamic neuro-symbolic reasoning, and computer vision to discuss open challenges and opportunities ahead. Main research topics of relevance to this workshop include, but are not limited to:

- representation learning over dynamic graphs, dynamic neuro-symbolic visual reasoning,

- design of hardware accelerators for dynamic networks, system support to deploy dynamic neural networks in cloud and edge devices,

- reinforcement learning for dynamic tasks, multi-objective optimization, model-based control,

- dynamic multi-task and transfer learning, dynamic modular networks, mixture-of-experts networks,

- dynamic network selection and configuration, instance-aware deep neural network design,

- conditional computation for faster inference, dynamic prediction and planning, adaptive computation time,

- dynamic networks for energy-efficient computing, application of dynamic neural networks into new areas,

- vision datasets and benchmarks for dynamic neural networks.
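The selective-execution idea described above can be illustrated with a minimal sketch of an early-exit scheme, one common form of conditional computation. All function names and the confidence heuristic here are hypothetical, chosen only to show the control flow: a cheap stage handles easy inputs, and the costly stage runs only when the cheap stage is not confident.

```python
# Hedged sketch of input-dependent "selective execution" (early exiting).
# The two stages and the confidence heuristic below are illustrative
# placeholders, not any specific published architecture.

def cheap_stage(x):
    # Hypothetical fast predictor: returns (label, confidence).
    # Inputs near zero stand in for "easy" examples it is sure about.
    return ("simple", 0.95) if abs(x) < 1.0 else ("unknown", 0.40)

def costly_stage(x):
    # Hypothetical expensive path, executed only for hard inputs.
    return "positive" if x >= 0 else "negative"

def dynamic_predict(x, threshold=0.9):
    """Run the cheap stage; fall through to the costly stage only
    when the cheap stage's confidence is below the threshold."""
    label, confidence = cheap_stage(x)
    if confidence >= threshold:      # easy input: exit early
        return label, "early-exit"
    return costly_stage(x), "full-network"

print(dynamic_predict(0.5))   # easy input, cheap stage suffices
print(dynamic_predict(3.0))   # hard input, costly stage runs
```

In a real dynamic network the exit decision is typically made per layer or per block from learned confidence estimates, so the average compute per input drops while accuracy on hard inputs is preserved - precisely the accuracy-efficiency trade-off the workshop topics above examine.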


Contact: For any questions or feedback, please contact Rameswar Panda at rpanda@ibm.com or Abir Das at abir@cse.iitkgp.ac.in.