Probabilistic Machine Learning

The PML course covers a range of probabilistic methods in Machine Learning. It is an advanced topics course, and the exact content will differ from year to year, but it will cover subjects such as:

  • Frequentist vs. Bayesian perspectives on probabilities

  • Graphical models

  • Deep generative models

  • Gaussian processes

  • Inference techniques

  • Probabilistic programming

The course assumes that you are thoroughly familiar with vector and matrix operations and have had basic training in probability theory, so that you are comfortable with concepts such as joint and conditional probability distributions, Bayes' theorem, and standard information-theoretic tools such as entropy and relative entropy (the Kullback-Leibler divergence); the key definitions are recalled below. We also expect you to have a good understanding of differentiation and of how gradients are used to optimize functions.

In addition, we assume that you have completed a basic course in Machine Learning, so that you are familiar with basic concepts (supervised vs. unsupervised learning) and with generalization (including practices such as train/validation/test splits and cross-validation). You are also expected to have practical experience with deep learning frameworks such as PyTorch or TensorFlow, to the extent that you are able to write code to train a simple autoencoder in one of these frameworks (a minimal example is sketched below). Finally, note that we will be using the Python programming language for the exercises in the course.
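As a quick refresher, these are the standard definitions of the probability and information-theory concepts mentioned above, stated here for discrete distributions:

```latex
% Bayes' theorem, relating the two conditional distributions
% to the marginals:
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}

% Entropy of a discrete distribution p:
H(p) = -\sum_{x} p(x) \log p(x)

% Relative entropy (Kullback-Leibler divergence) between p and q:
D_{\mathrm{KL}}(p \,\|\, q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)}
```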
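As a concrete calibration point for the autoencoder prerequisite, here is a minimal sketch in PyTorch. The architecture, the random stand-in data, and the hyperparameters are illustrative placeholders, not course material; you should be able to write and understand code at roughly this level.

```python
# Minimal fully connected autoencoder trained with gradient descent.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder compresses the input to a low-dimensional latent code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder reconstructs the input from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random data stands in for a real dataset (e.g. flattened MNIST images).
data = torch.randn(256, 784)

for epoch in range(10):
    optimizer.zero_grad()
    reconstruction = model(data)
    loss = loss_fn(reconstruction, data)  # reconstruction error
    loss.backward()                       # gradients via backpropagation
    optimizer.step()                      # gradient-based parameter update
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```

If writing and training a model like this feels unfamiliar, the self-assessment exercise mentioned below is a good way to gauge how much preparation you need.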

To make these prerequisites more precise, we have prepared a self-assessment exercise, which we encourage you to complete in order to test whether you have the necessary qualifications for the course. If you have difficulty solving it, see our "Preparing yourself for the course" page.