Efficient Deep Learning for Computer Vision
CVPR 2019
Long Beach, CA
June 16th - June 20th
Workshop overview
Computer Vision has a long history of academic research, and recent advances in deep learning have provided significant improvements in the ability to understand visual content. As a result of these research advances on problems such as object classification, object detection, and image segmentation, there has been a rapid increase in the adoption of Computer Vision in industry. However, mainstream Computer Vision research has given little consideration to speed or computation time, and even less to constraints such as power/energy, memory footprint, and model size. Addressing all of these metrics is nevertheless essential if advances in Computer Vision are to be widely available on mobile devices. The goal of the morning session of the workshop is to create a venue for considering this new generation of problems that arise as Computer Vision meets mobile system constraints. In the afternoon session, we will maintain a good balance between software, hardware, and network optimizations, with an emphasis on training efficient neural networks with high-performance computing architectures.
Particular topics that will be covered:
- Mobile applications. Novel mobile applications using Computer Vision such as image processing (e.g. style transfer, body tracking, face tracking) and augmented reality.
- Neural Net architecture search for mobile devices. Small and efficient Neural Net architectures are essential to meet the constraints of many mobile devices.
- Neural Network model compression. Compressed networks allow models to be stored efficiently on mobile devices and save bandwidth when transferring models.
- Quantized Neural Networks. Running low bit networks saves memory and increases inference speed on mobile devices.
- Optimizations and tradeoffs among computation time, accuracy, and memory usage, as motivated by mobile devices.
- Mobile Computer Vision on CPU vs GPU vs DSP. Investigations into the processor architectures that best support mobile applications.
- Hardware accelerators to support Computer Vision on mobile platforms.
Panel discussion
- Panel Title: Hardware Accelerators for Embedded Computer Vision.
- Abstract: We’ve seen great advances in inference accuracy for a number of computer vision tasks; however, these advances have come with a significant increase in computing cost. While advances in the design of mobile DNNs have reduced this cost considerably, a substantial gap remains between the computing power mobile applications need and what current processor platforms provide. In the last few years, most of the major semiconductor companies and over 30 startups have launched DNN accelerator efforts to fill this gap. In this panel we will discuss the state of the art in hardware accelerators for embedded computer vision.
- Panel Moderator: Chris Rowen
- Panelists: