GENERATIVE MODELING

Mathematical models capable of generating instances from high-dimensional, complex distributions in as controllable a manner as possible have revolutionized AI research. Two of the most common and effective generative approaches are Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). Focusing on GANs, it is well documented that their training often fails due to mode collapse, the fact that the optimal solution is a saddle point, and the limited mathematical understanding of the optimization process.
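To make the saddle-point remark concrete, the standard GAN training problem (Goodfellow et al., 2014) is a two-player minimax game between a generator G and a discriminator D; in the sketch below, p_data and p_z denote the data and latent-noise distributions.

\[
\min_{G} \max_{D} \; V(D,G)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big]
\]

Because the solution is a saddle point of V rather than a minimum, the alternating gradient updates on D and G used in practice need not converge, which is one source of the instabilities mentioned above.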

My research aims to devise new generative adversarial networks with improved convergence properties through suitable objective functions and function spaces; to develop novel theory that improves stability and deepens our understanding of GAN training; and to apply adversarial training to generate realistic images, perform voice conversion, and enhance speech quality.