Editable Generative Adversarial Networks:
Generating and Editing Faces Simultaneously


People

Kyungjune Baek, Duhyeon Bang and Hyunjung Shim

Abstract

We propose a novel framework for simultaneously generating and manipulating face images with desired attributes. While state-of-the-art attribute editing techniques have achieved impressive performance in creating realistic attribute effects, they address only the image editing problem, using the input image as a condition of the model. Recently, several studies have attempted to tackle both novel face generation and attribute editing with a single solution, but their image quality is still unsatisfactory. Our goal is to develop a single unified model that can simultaneously create and edit high-quality face images with desired attributes. A key idea of our work is to decompose the image into a latent vector and an attribute vector in a low-dimensional representation, and then utilize the GAN framework to map this low-dimensional representation to the image. In this way, we can address both the generation and editing problems by learning a single generator. In both qualitative and quantitative evaluations, the proposed algorithm outperforms recent algorithms addressing the same problem. We also show that our model achieves competitive performance with the state-of-the-art attribute editing technique in terms of attribute editing quality.
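The key idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the names (`encode`, `generate`) and the linear maps standing in for the real encoder and GAN generator are all assumptions made for clarity.

```python
import numpy as np

# Sketch of the key idea: decompose an image into a latent vector z and
# an attribute vector a, then map (z, a) back to image space with a
# generator. Random linear maps stand in for the trained networks.
rng = np.random.default_rng(0)

IMG_DIM, Z_DIM, ATTR_DIM = 64, 16, 4

W_enc = rng.standard_normal((Z_DIM + ATTR_DIM, IMG_DIM)) * 0.1
W_gen = rng.standard_normal((IMG_DIM, Z_DIM + ATTR_DIM)) * 0.1

def encode(image):
    """Decompose an image into (latent z, attribute vector a)."""
    code = W_enc @ image
    return code[:Z_DIM], code[Z_DIM:]

def generate(z, a):
    """Map the low-dimensional representation back to image space."""
    return W_gen @ np.concatenate([z, a])

# Editing: encode a real image, replace its attribute vector, regenerate.
image = rng.standard_normal(IMG_DIM)
z, a = encode(image)
edited = generate(z, np.array([1.0, 0.0, 0.0, 1.0]))  # desired attributes

# Generation: sample a novel latent code and choose attributes directly.
z_new = rng.standard_normal(Z_DIM)
novel = generate(z_new, np.array([0.0, 1.0, 0.0, 0.0]))
```

Because both editing and generation go through the same `generate` step, learning one generator suffices for both tasks, which is the unification the abstract describes.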

Proposed structure and Contributions

1) Our algorithm can generate realistic arbitrary faces, as well as edit input faces, with desired multiple attributes.

2) Owing to the attractive nature of the GAN latent space, we can easily identify a novel attribute subspace by analyzing that space. As a result, our model can manipulate various other attributes, not used for training the attribute classifier, without re-training.

3) Our model is more flexible to structural variations in attributes, such as pose, because no image-level information (e.g., skip connections) is transferred to the output.

4) We can control the degree of attribute effects without additional training or information.
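Contributions (2) and (4) can be illustrated with a common latent-space convention. The sketch below is an assumption-laden example: the data is synthetic, and finding an attribute direction as a difference of mean latent codes is a standard latent-arithmetic technique used here for illustration, not a quotation of the paper's exact procedure.

```python
import numpy as np

# (2) Identify a novel attribute direction in the GAN latent space from
# example latent codes with / without the attribute -- no re-training.
# (4) Control the degree of the attribute effect by scaling how far we
# move along that direction.
rng = np.random.default_rng(1)
Z_DIM = 16

# Hypothetical latent codes of images with / without some new attribute.
z_with = rng.standard_normal((100, Z_DIM)) + 0.5
z_without = rng.standard_normal((100, Z_DIM))

# Difference of mean codes approximates the attribute direction.
direction = z_with.mean(axis=0) - z_without.mean(axis=0)

def apply_attribute(z, direction, strength):
    """Move along the attribute direction; `strength` sets the degree."""
    return z + strength * direction

z = rng.standard_normal(Z_DIM)
subtle = apply_attribute(z, direction, 0.3)  # mild attribute effect
strong = apply_attribute(z, direction, 1.0)  # pronounced attribute effect
```

Feeding `subtle` and `strong` through the generator would yield the same face with progressively stronger attribute effects, which is why no additional training or labels are needed to control intensity.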

Editing Results

Generation Results

Acknowledgements

This work was supported by the ICT R&D program of MSIP/IITP [R7124-16-0004, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding]; the MSIT (Ministry of Science and ICT), Korea, under the "ICT Consilience Creative Program" (IITP-2018-2017-0-01015) supervised by the IITP (Institute for Information & communications Technology Promotion); the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the MSIP (NRF-2016R1A2B4016236); and the Ministry of Science and ICT, Korea (2018-0-00207, Immersive Media Research Laboratory).


Publication

Editable Generative Adversarial Networks: Generating and Editing Faces Simultaneously

Kyungjune Baek, Duhyeon Bang and Hyunjung Shim, Asian Conference on Computer Vision (ACCV), Oral Presentation, December 2018.