ABSTRACT

With the advent of high-fidelity displays and streaming services, consumers today expect the best visual quality, whether for gaming or video. Transmitting high-resolution images directly to consumers consumes substantial bandwidth. Super-resolution offers a way to mitigate this: if a low-resolution image is upscaled to a high-resolution image at the consumer end, considerable bandwidth can be saved. We apply two deep-learning techniques, autoencoders and generative adversarial networks, to the super-resolution task. Three different datasets are used to compare the two techniques in terms of output image quality (measured by peak signal-to-noise ratio) and computational complexity. The central aim of the project is to examine how the inverse reconstruction function (used to transform a low-resolution image into a high-resolution one) depends on the dataset each model is trained on. We also discuss the challenges of working with the different datasets and training the different architectures, and we perform cross-database testing to determine whether the inverse function learned by a model depends on the data it was trained on.
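For reference, the abstract does not spell out the metric, so we assume the standard definition of peak signal-to-noise ratio: for a reconstruction with mean squared error $\mathrm{MSE}$ against the ground-truth image and maximum possible pixel value $\mathrm{MAX}$ (e.g., 255 for 8-bit images),

$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right),$$

so a higher PSNR (in dB) corresponds to a lower reconstruction error.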