Face Mesh
Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs
Yury Kartynnik, Artsiom Ablavatski, Ivan Grishchenko, and Matthias Grundmann.
Abstract
We present an end-to-end neural network-based model for inferring an approximate 3D mesh representation of a human face from single camera input for AR applications. The relatively dense mesh model of 468 vertices is well-suited for face-based AR effects. The proposed model demonstrates super-realtime inference speed on mobile GPUs (100-1000+ FPS, depending on the device and model variant) and a high prediction quality that is comparable to the variance in manual annotations of the same image.
Paper
Presented at the Third Workshop on Computer Vision for AR/VR at CVPR 2019,
June 17, 2019, Long Beach, CA.