Face Mesh

Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs

Yury Kartynnik, Artsiom Ablavatski, Ivan Grishchenko, and Matthias Grundmann.

Abstract

We present an end-to-end neural network-based model for inferring an approximate 3D mesh representation of a human face from a single camera input for AR applications. The relatively dense mesh model of 468 vertices is well-suited for face-based AR effects. The proposed model demonstrates super-realtime inference speed on mobile GPUs (100-1000+ FPS, depending on the device and model variant) and a high prediction quality that is comparable to the variance in manual annotations of the same image.
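The model described in the paper underlies the publicly released MediaPipe Face Mesh solution. As an illustrative sketch only, the snippet below shows how the 468-vertex mesh might be obtained through the legacy MediaPipe Python API; the package, class, parameter names, and the input file name are taken from that public API (and are assumptions here), not from the paper itself.

```python
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

# Static-image mode runs face detection on every image instead of tracking.
with mp_face_mesh.FaceMesh(static_image_mode=True,
                           max_num_faces=1,
                           min_detection_confidence=0.5) as face_mesh:
    image = cv2.imread('face.jpg')  # hypothetical input image path
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.multi_face_landmarks:
        for face_landmarks in results.multi_face_landmarks:
            # 468 landmarks per face: x and y are normalized to [0, 1] by the
            # image width and height; z is a relative depth on a similar scale.
            print(len(face_landmarks.landmark))
```

For video streams, setting static_image_mode=False lets the solution track landmarks across frames instead of re-running face detection on every frame.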

Paper

Presented at the Third Workshop on Computer Vision for AR/VR at CVPR 2019

June 17, 2019, Long Beach, CA.

CV4AR_Real_time_Facial_Surface_geometry_from_Monocular_Video_on_Mobile_GPUs.pdf

Poster


Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs

Google Research Blog

Friday, March 8, 2019