Yu-Hsin Chen

Co-design the Stack for On-Device AI in AR

Abstract: Augmented Reality (AR) will fundamentally change the way we interact with our environment. By enhancing the real world with computer-generated perceptual information across multiple sensory modalities, it can give people perceptual superpowers and personalized, context-aware assistance through an easily accessible and socially acceptable form factor, such as eyeglasses. The majority of the critical algorithms in AR, such as those in the domain of computer vision, are powered by AI to deliver the best user experience. However, these algorithms demand high compute and storage resources, and have to be delivered under tight physical constraints in order to keep the device lightweight and provide all-day battery life. This necessitates co-designing and co-optimizing the entire stack, from AI algorithms to the hardware platforms. In this talk, we will present our early work demonstrating the benefits and potential of such co-design approaches, including the opportunities and challenges of leveraging near-sensor ML compute in AR systems, and discuss open research areas for the community to further explore.

Yu-Hsin Chen is a research scientist at Facebook focusing on hardware/software co-design to enable on-device AI for AR/VR systems. He received the M.S. and Ph.D. degrees in EECS from MIT, Cambridge, MA, in 2013 and 2018, respectively. He received the 2019 ACM SIGARCH/IEEE-CS TCCA Outstanding Dissertation Award and the 2018 Jin-Au Kong Outstanding Doctoral Thesis Prize in Electrical Engineering at MIT. His work on dataflows for CNN accelerators was selected as one of the Top Picks in Computer Architecture in 2016. He co-taught a tutorial on “Hardware Architectures for Deep Neural Networks” at MICRO-49, ISCA 2017, MICRO-50, and ISCA 2019. He has served on the TPC for MLSys and the ERC for HPCA, MICRO, and ASPLOS.