IFT 6135 - Representation Learning

Course Lectures

22 – Odds and Ends (15/04/2020)

In this lecture we will briefly discuss a few of the important emerging topics in the area of representation learning. Each of these really deserves a full lecture devoted to it, but here we will endeavour to introduce the central ideas. Areas that I hope to cover:

  • self-supervised learning
  • adversarial domain adaptation
  • multimodal machine learning (specifically text and vision)
  • systematic generalization

Video:

Slides:

20 – Graph Representation Learning (06/04/2020)

In this lecture, Will Hamilton will discuss the area of graph representation learning. We will introduce standard techniques for learning low-dimensional embeddings of graph data, as well as the graph neural network (GNN) framework.
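
As a preview of the GNN framework, here is a minimal sketch of a single message-passing layer in PyTorch (the class name and the mean-aggregation choice are illustrative assumptions, not necessarily the variant covered in the lecture):

    import torch
    import torch.nn as nn

    class MessagePassingLayer(nn.Module):
        """One GNN layer: aggregate neighbour features, then update each node."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(2 * in_dim, out_dim)

        def forward(self, h, adj):
            # h: (num_nodes, in_dim) node features; adj: (num_nodes, num_nodes) 0/1 adjacency
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # avoid dividing by zero
            neigh = (adj @ h) / deg                          # mean over neighbours
            return torch.relu(self.linear(torch.cat([h, neigh], dim=-1)))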

Video:

Slides:

Reference:

19/21 – GANs (01/04/2020 & 08/04/2020)

In this lecture, we will discuss Generative Adversarial Networks (GANs). GANs are a recent and very popular paradigm for generative modeling. We will discuss the GAN formalism, some theory, and practical considerations.
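
For reference, the minimax game at the heart of the GAN formalism (as in the original formulation, with generator G and discriminator D) is:

    \min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p(z)}\big[\log\big(1 - D(G(z))\big)\big]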

Video:

Slides:

Reference: (* = you are responsible for this material)

18 – Normalizing Flows (30/03/2020)

In this lecture, we will finish the inference sub-optimality part of the VAE lecture and have a crash course on Normalizing Flows. We will see how flows can be used (1) as a generative model, by inverting the transformation of the data distribution into a prior distribution, and (2) to reduce the approximation gap of VAEs, by providing a more flexible family of variational distributions.
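
The change-of-variables formula is the workhorse behind point (1): if x = f(z) for an invertible f and z drawn from a prior p_Z, the model density is

    \log p_X(x) = \log p_Z\big(f^{-1}(x)\big) + \log \left| \det \frac{\partial f^{-1}(x)}{\partial x} \right|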

Video: Chin-Wei's Lecture

Slides: Normalizing Flows

Reference: (* = you are responsible for this material)

17 – Variational Autoencoders (cont.) (25/03/2020)

In this lecture, Chin-Wei continues his talk about a family of latent variable models known as Variational Autoencoders (VAEs).

Video: Chin-Wei's Lecture

Slides (same as below -- lecture 15): Variational Autoencoders

16 – Meta-Learning - Hugo Larochelle (23/03/2020)

This will be the first of our online lectures (using Zoom).

A lot of the recent progress on many AI tasks was enabled in part by the availability of large quantities of labeled data. Yet, humans are able to learn concepts from as little as a handful of examples. Meta-learning is a very promising framework for addressing the problem of generalizing from small amounts of data, known as few-shot learning. In meta-learning, our model is itself a learning algorithm: it takes as input a training set and outputs a classifier. For few-shot learning, it is (meta-)trained directly to produce classifiers with good generalization performance for problems with very little labeled data. In this talk, I'll present an overview of the recent research that has made exciting progress on this topic (including my own) and, if time permits, will discuss the challenges as well as research opportunities that remain.
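
To make the "model takes a training set and outputs a classifier" idea concrete, here is a minimal sketch of one few-shot episode in the style of prototypical networks (one method from this literature; the function and variable names are illustrative assumptions):

    import torch
    import torch.nn.functional as F

    def episode_loss(encoder, support_x, support_y, query_x, query_y, n_classes):
        """One few-shot episode: build a prototype per class from the small
        support ("training") set, then classify queries by nearest prototype."""
        z_support = encoder(support_x)                      # (n_support, dim)
        z_query = encoder(query_x)                          # (n_query, dim)
        protos = torch.stack([z_support[support_y == c].mean(0)
                              for c in range(n_classes)])   # (n_classes, dim)
        dists = torch.cdist(z_query, protos)                # (n_query, n_classes)
        return F.cross_entropy(-dists, query_y)             # nearer prototype = larger logit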

Video: Hugo's Lecture

Slides: Meta-Learning slides

15 – Variational Autoencoders (11/03/2020)

In this lecture, Chin-Wei will talk about a family of latent variable models known as Variational Autoencoders (VAEs). We’ll see how a deep latent Gaussian model can be seen as an autoencoder via amortized variational inference, and how such an autoencoder can be used as a generative model. At the end, we’ll take a look at variants of VAE and different ways to improve inference.
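
The training objective this construction maximizes is the evidence lower bound (ELBO): for a datapoint x, amortized encoder q_\phi(z|x), decoder p_\theta(x|z), and prior p(z),

    \log p_\theta(x) \;\geq\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)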


Slides:

Reference: (* = you are responsible for this material)


14 – Autoencoders and Autoregressive Generative Models (26/02/2020-11/03/2020)

In this lecture we will take a closer look at a form of neural network known as an Autoencoder. We will also begin our look at generative models with Autoregressive Models.
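
To fix ideas, here is a minimal PyTorch autoencoder sketch (the layer sizes are arbitrary assumptions):

    import torch.nn as nn

    class Autoencoder(nn.Module):
        """Compress the input to a low-dimensional code, then reconstruct it."""
        def __init__(self, in_dim=784, code_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                         nn.Linear(256, code_dim))
            self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                         nn.Linear(256, in_dim))

        def forward(self, x):
            return self.decoder(self.encoder(x))  # train with a reconstruction loss, e.g. MSE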

Slides:

Reference: (* = you are responsible for this material)

13 – Normalization Methods (24/02/2020)

In this lecture, we will introduce a number of normalization techniques that have become very popular in training deep neural networks.
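
As a preview, batch normalization, perhaps the most widely used of these, standardizes each feature using mini-batch statistics and then rescales with learned parameters \gamma and \beta:

    \hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta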

Slides and Notes:

Reference: (* = you are responsible for this material)

12 – Regularization (17/02/2020 - 19/02/2020)

In these lectures, we will have a rather detailed discussion of regularization methods and their interpretation.
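
As one canonical example of what we will cover, L2 regularization (weight decay) adds a penalty on the parameter norm to the training objective:

    \tilde{J}(\theta) = J(\theta) + \frac{\lambda}{2} \|\theta\|_2^2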

Slides:

Reference: (* = you are responsible for this material)

11 – Optimization (12/02/2020)

In this lecture, we will discuss both popular and practical first-order optimization methods. We will not discuss second-order methods in class, but I do provide slides on some of them and their interpretation.
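
As a representative example, SGD with momentum (in one common formulation) updates the parameters \theta with learning rate \eta and momentum coefficient \mu as:

    v_{t+1} = \mu v_t - \eta \nabla_\theta J(\theta_t), \qquad \theta_{t+1} = \theta_t + v_{t+1}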

Slides:

Reference: (* = you are responsible for this material)

10 – Object Detection and Segmentation (10/02/2020)

In this lecture Sai Rajeswar will discuss applications of deep learning to computer vision tasks such as object detection and segmentation. These are some of the core problems in computer vision, and we will see how deep convolutional networks can be used to address them efficiently, in terms of both performance and speed. We will develop some fundamental intuitions and then look into a few key object detection and segmentation models such as SSD, YOLO, Faster-RCNN and Mask-RCNN.

Slides:

Reference:

09 – Self-Attention and Transformer (05/02/2020)

In this talk Arian Hosseini will look at self-attention and the transformer model. We will see how they work, dig deep into their components, examine analyses of their behaviour and performance, and survey their applications, mainly in natural language processing. We will see how some language models based on the transformer architecture have surpassed human performance on some language understanding tasks, and we will also discuss their shortcomings.
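
At the core of the transformer is scaled dot-product attention, which for query, key, and value matrices Q, K, V with key dimension d_k computes:

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right) V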

Slides:

Reference:

08 – Attention (03/02/2020)

In this lecture prepared by Dzmitry (Dima) Bahdanau, I will discuss attention in neural networks.

Slides:

Reference: (* = you are responsible for this material)

07 – Sequential Models (29/01/2020)

In this lecture we introduce Recurrent Neural Networks and related models.
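
The defining feature of a simple (Elman-style) RNN is a hidden state carried across time steps; in one common formulation:

    h_t = \tanh(W_{hh} h_{t-1} + W_{xh} x_t + b), \qquad \hat{y}_t = g(W_{hy} h_t)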

Lecture 08 RNNs (slides derived from Hugo Larochelle)

Reference: (* = you are responsible for this material)

06 – ConvNets II (27/01/2020)

Today we conclude our discussion of convolutional neural networks.

Lecture 04 CNNs II (Slides are from Hiroshi Kuwajima’s Memo on Backpropagation in Convolutional Neural Networks.)

Reference: (* = you are responsible for this material)


05 – PyTorch Tutorial (20/01/2020)

In this lecture, Chin-Wei will give a tutorial on PyTorch. You are encouraged to bring your laptop to class to practice.

The Colab notebooks for the tutorial can be found in the link below:

https://drive.google.com/open?id=1fmZuTTm-HzN3x09fmZaotgkjORwqyFrx


We will cover (see the minimal sketch after this list):

  • torch tensors,
  • how to do backprop,
  • torch.nn (modules) and how to build your own,
  • training loop
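
A minimal sketch combining all four pieces (the toy data, model, and hyperparameters are placeholder assumptions, not taken from the tutorial notebooks):

    import torch
    import torch.nn as nn

    # Toy tensors standing in for a real dataset
    x = torch.randn(64, 10)
    y = torch.randint(0, 2, (64,))

    # A small model built from torch.nn modules
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # Training loop
    for step in range(100):
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()   # backprop
        optimizer.step()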


04 – ConvNets I (15/01/2020 and 22/01/2020)

In this lecture we finish up our discussion of training neural networks and we introduce Convolutional Neural Networks.

Lecture 03 CNNs (some slides are modified from Hugo Larochelle’s course notes)

Reference: (* = you are responsible for all of this material)

03 – Training NNets & ML Problems (13/01/2020-15/01/2020)

In these lectures we continue our introduction to neural networks and discuss how to train them, i.e., the backpropagation algorithm.
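
In brief, backpropagation is the chain rule applied layer by layer: for a layer with pre-activation a^{(l)} = W^{(l)} h^{(l-1)} and output h^{(l)} = f(a^{(l)}), the gradient of the loss L flows backward as

    \frac{\partial L}{\partial a^{(l)}} = \frac{\partial L}{\partial h^{(l)}} \odot f'\big(a^{(l)}\big), \qquad
    \frac{\partial L}{\partial h^{(l-1)}} = \big(W^{(l)}\big)^{\!\top} \frac{\partial L}{\partial a^{(l)}}, \qquad
    \frac{\partial L}{\partial W^{(l)}} = \frac{\partial L}{\partial a^{(l)}} \big(h^{(l-1)}\big)^{\!\top}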

Lecture 02 training NNs (slides modified from Hugo Larochelle’s course notes)

Machine learning problems (Delayed: slides from Hugo Larochelle's CIFAR DLSS 2019 lectures)

Reference: (you are responsible for all of this material)

  • Chapter 6 of the Deep Learning textbook (by Ian Goodfellow, Yoshua Bengio and Aaron Courville).

01 & 02 – Introduction & Review (06 / 01 / 2020 - 08/01/2020)

The first class is January 6th, 2020. We discuss the plan for the course and the pedagogical method chosen. In this lecture we will also begin our detailed introduction to Neural Networks.

Lecture 01 artificial neurons (slides from Hugo Larochelle’s course notes)

Reference: (you are responsible for all of this material)

  • Chapter 6 of the Deep Learning textbook (by Ian Goodfellow, Yoshua Bengio and Aaron Courville).

00 – Review / Background Material (06 / 01 / 2020)

Review of some foundational material, covering linear algebra, calculus, and the basics of machine learning.

Lecture 00 slides (slides built on Hugo Larochelle’s slides)

Reference:

  • Chapters 1-5 of the Deep Learning textbook (by Ian Goodfellow, Yoshua Bengio and Aaron Courville).