Neural face reenactment transfers the facial pose, i.e., the rigid 3D head orientation together with the non-rigid facial expression, from a reference (driving) face image to a source face image. This technology is central to generating realistic digital head avatars, with applications in telepresence, Augmented Reality/Virtual Reality (AR/VR), and the creative industries.
We propose the Dual-Generator (DG) network for large-pose face reenactment. Given a source face and a reference face as inputs, the DG network generates an output face with the same pose and expression as the reference face and the same identity as the source face.
The Dual-Generator (DG) network is composed of two generators: the ID-preserving Shape Generator (IDSG) and the Reenacted Face Generator (RFG).
➢ The IDSG transfers the reference's pose and expression to the source in shape space and generates a 3D target landmark estimate.
➢ The RFG fuses the 3D target landmark estimate with the source face to generate the reenacted face (see the pipeline sketch after this list).
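To make the two-stage pipeline concrete, below is a minimal PyTorch-style sketch of how an IDSG and an RFG could be wired together. All module definitions, layer choices, tensor names, and the `lmk_to_heatmaps` helper are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Minimal sketch of the DG two-stage pipeline (assumed structure, not the official code).
import torch
import torch.nn as nn


class IDSG(nn.Module):
    """ID-preserving Shape Generator (sketch): maps source and reference 3D
    landmarks to target landmarks that keep the source identity but adopt the
    reference pose/expression."""
    def __init__(self, num_landmarks=68, hidden=256):
        super().__init__()
        in_dim = num_landmarks * 3 * 2            # source + reference 3D landmarks
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_landmarks * 3)  # 3D target landmark estimate
        )

    def forward(self, src_lmk3d, ref_lmk3d):
        # src_lmk3d, ref_lmk3d: (B, N, 3)
        x = torch.cat([src_lmk3d.flatten(1), ref_lmk3d.flatten(1)], dim=1)
        return self.mlp(x).view(-1, src_lmk3d.shape[1], 3)


class RFG(nn.Module):
    """Reenacted Face Generator (sketch): fuses the source image with the
    target landmarks (rasterized into per-landmark heatmaps) and decodes the
    reenacted face. A real RFG would be a full encoder-decoder GAN generator."""
    def __init__(self, num_landmarks=68):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_landmarks, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh()
        )

    def forward(self, src_img, tgt_lmk_maps):
        # src_img: (B, 3, H, W); tgt_lmk_maps: (B, num_landmarks, H, W)
        return self.net(torch.cat([src_img, tgt_lmk_maps], dim=1))


# Usage sketch: 3D landmarks are assumed to come from an off-the-shelf detector,
# and lmk_to_heatmaps is a hypothetical rasterization helper.
# idsg, rfg = IDSG(), RFG()
# tgt_lmk3d = idsg(src_lmk3d, ref_lmk3d)                # stage 1: shape space
# reenacted = rfg(src_img, lmk_to_heatmaps(tgt_lmk3d))  # stage 2: image space
```

The key design point this sketch reflects is the decoupling: identity-preserving pose/expression transfer happens purely in landmark (shape) space, and only afterwards does the image-space generator synthesize appearance conditioned on the source face and the predicted landmarks.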
The source code can be downloaded from here.