Efficient Deep Learning for Computer Vision

CVPR 2018

SALT LAKE CITY, UTAH

June 18th - June 22nd

Workshop overview

Computer Vision has a long history of academic research, and recent advances in deep learning have provided significant improvements in the ability to understand visual content. As a result of these research achievements on problems such as object classification, object detection, and image segmentation, adoption of Computer Vision in industry has increased rapidly. However, mainstream Computer Vision research has given little consideration to speed or computation time, and even less to constraints such as power/energy, memory footprint, and model size. Addressing all of these metrics is nevertheless essential if advances in Computer Vision are to become widely available on mobile devices. The goal of the workshop's morning session is to create a venue for considering the new generation of problems that arises as Computer Vision meets mobile system constraints. The afternoon session will balance software, hardware, and network optimizations, with an emphasis on training efficient neural networks on high-performance computing architectures.

Particular topics that will be covered:

    • Mobile applications. Novel mobile applications using Computer Vision such as image processing (e.g. style transfer, body tracking, face tracking) and augmented reality.
    • Neural Net Architecture search for mobile devices. Small and efficient Neural Net architectures are essential to meet the constraints of many mobile devices.
    • Neural Network model compression. Compressed networks allow models to be stored efficiently on mobile devices and save bandwidth when transferring models.
    • Quantized Neural Networks. Running low-bit networks saves memory and increases inference speed on mobile devices.
    • Optimizations and tradeoffs among computation time, accuracy, and memory usage, as motivated by mobile devices.
    • Mobile Computer Vision on CPU vs GPU vs DSP. Investigations into the processor architectures that best support mobile applications.
    • Hardware accelerators to support Computer Vision on mobile platforms.
    • Efficient training to accelerate deep learning in the cloud.
      • Techniques for optimizing network design to scale training and achieve sub-real-time classification
      • Coverage of diverse hardware solutions, e.g. Tensor Processing Units (TPUs) and GPU interconnect architectures that move the needle to 100+ TeraFLOPS of compute
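Several of the topics above, such as model compression and quantized networks, share a simple core idea: representing float32 weights with fewer bits. As a rough illustration (not material from the workshop itself), the sketch below shows affine 8-bit post-training quantization with NumPy; the function names and error bound are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def quantize_uint8(weights):
    """Affine (asymmetric) 8-bit quantization of a float32 tensor.

    Maps the float range [min, max] onto the integer range [0, 255] and
    returns the quantized tensor plus the (scale, zero_point) needed to
    dequantize. Storing uint8 instead of float32 cuts weight memory by 4x.
    (Illustrative sketch; names are hypothetical.)
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = int(round(-w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float32 tensor from the 8-bit representation."""
    return (q.astype(np.float32) - zero_point) * scale

# Example: quantize a random weight matrix and measure storage and error.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)
q, scale, zp = quantize_uint8(w)
w_hat = dequantize(q, scale, zp)

print("memory (float32):", w.nbytes)   # 262144 bytes
print("memory (uint8):  ", q.nbytes)   # 65536 bytes (4x smaller)
print("max abs error:   ", np.abs(w - w_hat).max())
```

In practice, per-channel scales and quantization-aware training typically recover most of the accuracy lost by this kind of simple per-tensor scheme.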

Panel discussion

    • Panel Title: High-Performance Training for Deep Learning and Computer Vision.
    • Abstract: We have seen great advances in deep learning solutions and technology over the past few years, leading to remarkable impact in machine learning and computer vision. However, we are still on the brink of further disruption, especially as data collection grows with mobile cameras and self-driving vehicles. Systems are evolving to process petabytes of data and millions of training images, with a growing need for adaptive, real-time processing. Advances in the GPU technology stack, interconnect optimization, and chip-level ML designs, combined with algorithmic techniques for scaling deep learning and computer vision, are pushing the envelope even further. This panel will discuss opportunities and challenges in this critical area.
    • Panel Moderator: Dr. Ramesh Sarukkai
    • Panelists:
      • Michael James, Chief Architect, Cerebras
      • Dave Driggers, CEO, Cirrascale
      • John Barrus, Sr. Product Lead TensorFlow/TPU, Google
      • Dhabaleswar Panda, Professor at Ohio State University