08:00 - 08:10 am Opening remarks
Presenter: Prof. Yingyan (Celine) Lin
08:10 - 08:55 am Introduction and background knowledge
Presenter: Yongan Zhang
Deep Neural Network (DNN) workloads
DNN basics
Why are DNNs promising?
Why do we need efficient models and hardware accelerators?
Recent progress in efficient DNN models and accelerators
Challenges in developing efficient DNN models and accelerators
A promising solution → co-search, hence this tutorial
08:55 - 09:45 am Auto-NBA: Co-search algorithm and network design space
Presenter: Yonggan Fu
Background:
What is the role of hardware search space and modeling?
Motivation:
Why is it crucial?
Efficacy:
What does it enable and what are the use cases?
Technical Details:
Examples + Guidance:
Toy example illustration
09:45 - 10:00 am Auto-NBA: Hardware design space and performance modeling
Presenter: Yongan Zhang
Role of the hardware component in Auto-NBA
Use cases beyond Auto-NBA
Detailed methodology
10:00 - 10:20 am HW-NAS-Bench (1)
Presenter: Chaojian Li
Challenges in Co-Search When Targeting More (Commercial) Devices
Highlighted Features of Our Proposed HW-NAS-Bench
Analysis Offered in HW-NAS-Bench
Real Use Cases Reported by Other Researchers
10:20 - 11:00 am Q&A + Coffee break
11:00 - 11:25 am HW-NAS-Bench (2)
Presenter: Chaojian Li
Hands-on Demonstration
11:25 - 11:55 am DNN-Chip Predictor
Presenter: Yang Zhao
Background:
DNN-Chip Predictor's Role in Co-Search
Overview:
What You Can Do with DNN-Chip Predictor
Technical Details:
DNN-Chip Predictor's Analytical Models
Evaluation:
Evaluating DNN-Chip Predictor
Guidance:
How to Use DNN-Chip Predictor
11:55 - 12:00 pm Q&A