Overview and Scope

Deep generative models (DGMs) have become an important research branch of deep learning. DGMs include a broad family of methods such as generative adversarial networks (GANs), variational autoencoders (VAEs), and autoregressive (AR) models. These models combine deep neural networks with classical density estimation (either explicit or implicit), mainly to generate synthetic data samples. Although these methods have achieved state-of-the-art results in generating synthetic data of many types, such as images, speech, text, molecules, and video, DGMs remain difficult to train.

There are still open problems, such as vanishing gradients and mode collapse in GANs, that limit their performance. Although there are strategies to lessen the effect of these problems, they remain fundamentally unsolved. In recent years, evolutionary computation (EC) and related techniques (e.g., particle swarm optimization), often in the form of Evolutionary Machine Learning approaches, have been successfully applied to mitigate the problems that arise when training DGMs, raising the quality of the results to impressive levels. Among other approaches, these new solutions include GAN, VAE, and AR training methods based on evolutionary and coevolutionary algorithms, the combination of deep neuroevolution with DGM training approaches, and the evolutionary exploration of the latent space.
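
To give a flavor of the last of these ideas, the sketch below shows one minimal way evolutionary exploration of the latent space can be set up: a simple (mu + lambda) evolution strategy evolves latent vectors of a fixed generator toward higher fitness. The toy generator and fitness function here are placeholders chosen purely for illustration (in practice they would be a pretrained DGM and, e.g., a discriminator score or property predictor), not the method of any particular work.

    import numpy as np

    rng = np.random.default_rng(0)
    LATENT_DIM = 16
    W = rng.standard_normal((LATENT_DIM, 32))  # fixed weights of the toy generator

    def generator(z):
        # Stand-in for a pretrained DGM generator/decoder (hypothetical):
        # maps a latent vector to a "sample" via a fixed nonlinear transform.
        return np.tanh(z @ W)

    def fitness(sample):
        # Stand-in objective; in practice this could be a discriminator score,
        # a property predictor (e.g., for molecules), or a perceptual metric.
        return -np.sum((sample - 0.5) ** 2)

    # (mu + lambda) evolution strategy searching the latent space of the fixed generator.
    mu, lam, sigma, generations = 8, 32, 0.3, 50
    population = rng.standard_normal((mu, LATENT_DIM))

    for _ in range(generations):
        parents = population[rng.integers(0, mu, size=lam)]
        offspring = parents + sigma * rng.standard_normal((lam, LATENT_DIM))
        candidates = np.vstack([population, offspring])
        scores = np.array([fitness(generator(z)) for z in candidates])
        population = candidates[np.argsort(scores)[-mu:]]  # keep the mu best

    best = population[-1]
    print("best latent fitness:", float(fitness(generator(best))))

The same loop structure carries over to the other topics listed above (e.g., coevolutionary GAN training), with the population holding network weights or hyperparameters instead of latent vectors.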

This workshop aims to act as a medium for debate and the exchange of knowledge and experience, and to encourage collaboration between researchers focused on DGMs and the EC community. Bringing these two communities together will be essential for making significant advances in this research area. Thus, this workshop provides a critical forum for disseminating experience on enhancing generative modeling with EC, presenting new and ongoing research in the field, and attracting new interest from our community.