Deep Learning without Annotations
Deep neural networks have become a popular machine learning tool, supported by their outstanding performance in a diverse range of tasks that began with image classification and machine translation and now extends to protein structure prediction and the control of nuclear fusion experiments. The initial approaches for estimating the millions of parameters in neural networks required collecting large amounts of labels, which entailed a high annotation cost and risked replicating the biases of the annotators. This short course focuses on two approaches that do not require annotations: self-supervised learning and generative modelling. Self-supervised learning defines pretext tasks that can be trained without labels, allowing a much more efficient use of the labels available for the final downstream task. Generative models, on the other hand, focus on learning the distribution of the data and producing new samples which, in turn, can also be used to pre-train a neural model before its downstream task.
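As a taste of what the self-supervised session covers, below is a minimal sketch of one classic pretext task, rotation prediction, where the "label" comes for free from the data itself. It assumes PyTorch; the encoder, head and random batch are placeholder choices, and the helper `pretext_batch` is a hypothetical name, not part of the course materials.

```python
# Minimal sketch of a self-supervised pretext task (rotation prediction),
# assuming PyTorch; architectures and data are placeholders.
import torch
import torch.nn as nn

encoder = nn.Sequential(            # stand-in backbone; any CNN would do
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, 4)             # predicts one of 4 rotations: 0/90/180/270
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def pretext_batch(images):
    """Rotate each image by a random multiple of 90 degrees; the rotation
    index is the (free) label for the pretext task."""
    k = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(images, k)])
    return rotated, k

images = torch.randn(8, 3, 32, 32)  # placeholder for an unlabeled batch
x, y = pretext_batch(images)
loss = loss_fn(head(encoder(x)), y)
opt.zero_grad(); loss.backward(); opt.step()
# After pretraining, `encoder` is reused and fine-tuned on the downstream task,
# where far fewer annotated examples are then needed.
```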
9:00 Self-supervised learning - Xavier Giró
10:00 Generative Adversarial Networks (GANs) - Xavier Giró
10:30 Coffee break
11:00 Lab: GAN - Laia Tarrés (see the sketch after the schedule)
11:30 Variational Autoencoders (VAE) - Xavier Giró
12:00 Lab: VAE - Laia Tarrés (see the sketch after the schedule)
12:30 End
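Ahead of the two lab sessions, here are minimal sketches of the two generative models on the programme. Both assume PyTorch, and all architectures, sizes and data are placeholder choices, not the labs' actual code.

A GAN trains a generator and a discriminator with opposing objectives: the discriminator learns to separate real samples from generated ones, while the generator learns to fool it.

```python
# Minimal GAN training step on toy 1-D data, assuming PyTorch.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator (logits)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 1) * 0.5 + 2.0   # toy "real" distribution N(2, 0.5)
z = torch.randn(64, 16)

# Discriminator step: real -> 1, fake -> 0 (fakes detached so G is untouched).
fake = G(z)
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: push D to output 1 on fakes (non-saturating loss).
g_loss = bce(D(G(z)), torch.ones(64, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

A VAE instead learns an explicit latent distribution: an encoder outputs the mean and log-variance of an approximate posterior, a decoder reconstructs the input from a sampled latent, and the loss combines a reconstruction term with a KL divergence to the prior.

```python
# Minimal VAE training step, assuming PyTorch; sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(784, 2 * 8)      # outputs mean and log-variance of an 8-d latent
dec = nn.Linear(8, 784)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.rand(32, 784)          # placeholder batch of inputs in [0, 1]
mu, logvar = enc(x).chunk(2, dim=1)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
recon = torch.sigmoid(dec(z))

# Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I))
rec_loss = F.binary_cross_entropy(recon, x, reduction="sum")
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss = rec_loss + kl
opt.zero_grad(); loss.backward(); opt.step()
```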