GPU Inference

On-Device Augmented Reality with Mobile GPUs

Juhyun Lee, Nikolay Chirkov, Ekaterina Ignasheva, Yury Pisarchyk, Mogan Shieh, Fabio Riccardi, Raman Sarokin, Andrei Kulik and Matthias Grundmann

Abstract

Reliable computer vision is a prerequisite for augmented reality (AR) applications, particularly when they run on mobile devices. Many state-of-the-art vision techniques employ deep neural networks. However, inference is a compute-intensive task, and running it solely on the mobile CPU can be difficult due to limited computing power, thermal constraints, and energy consumption. App developers and researchers have begun exploiting hardware accelerators to overcome these challenges. Recently, device manufacturers have started adding neural processing units to high-end phones for on-device inference, but these account for only a small fraction of handheld devices. In this paper, we present how we leverage the mobile GPU, a ubiquitous hardware accelerator available on virtually every phone, to achieve real-time AR effects on both Android and iOS devices.

Download

Our state-of-the-art mobile GPU inference engine is integrated into the open-source project TensorFlow Lite and publicly available at https://tensorflow.org/lite.
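
As a brief illustration, the sketch below shows one way an Android app might enable GPU-accelerated inference through the TensorFlow Lite Java API's GpuDelegate. The model buffer, tensor shapes, and class/method names used for the wrapper are placeholders for illustration, not values taken from the paper.

```java
import java.nio.MappedByteBuffer;

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

public final class GpuInferenceExample {

  /**
   * Runs a single inference on the mobile GPU. The model buffer is assumed to
   * hold a .tflite model loaded elsewhere (e.g., memory-mapped from the app's
   * assets); the tensor shapes below are placeholders.
   */
  static float[][] runOnGpu(MappedByteBuffer modelBuffer, float[][] input) {
    // Route supported ops to the mobile GPU via the GPU delegate.
    GpuDelegate gpuDelegate = new GpuDelegate();
    Interpreter.Options options = new Interpreter.Options().addDelegate(gpuDelegate);
    Interpreter interpreter = new Interpreter(modelBuffer, options);

    // Placeholder output shape; match it to the model's actual output tensor.
    float[][] output = new float[1][1001];
    interpreter.run(input, output);

    // The interpreter and delegate hold native/GPU resources; release them explicitly.
    interpreter.close();
    gpuDelegate.close();
    return output;
  }
}
```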