Schedule

08:30 AM Welcome and Introduction

08:40 AM Hardware Efficiency Aware Neural Architecture Search and Compression - Song Han (Invited talk)

09:10 AM Structured matrices for efficient deep learning - Sanjiv Kumar (Invited talk)

09:40 AM DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression (Talk)

10:00 AM Poster spotlight presentations (Talk)

10:30 AM Coffee Break (Break)

11:00 AM Understanding the Challenges of Algorithm and Hardware Co-design for Deep Neural Networks - Vivienne Sze (Invited talk)

11:30 AM Dream Distillation: A Data-Independent Model Compression Framework (Talk)

11:50 AM The State of Sparsity in Deep Neural Networks (Talk)

12:10 PM Lunch break (Break)

12:40 PM Poster Session

02:00 PM DNN Training and Inference with Hyper-Scaled Precision - Kailash Gopalakrishnan (Invited talk)

02:30 PM Mixed Precision Training & Inference - Jonathan Dekhtiar (Invited talk)

03:00 PM Coffee Break (Break)

03:30 PM Learning Compact Neural Networks Using Ordinary Differential Equations as Activation Functions (Talk)

03:50 PM Triplet Distillation for Deep Face Recognition (Talk)

04:10 PM Single-Path NAS: Device-Aware Efficient ConvNet Design (Talk)

04:30 PM Panel Discussion

05:30 PM Wrap-up and Closing