Bayesian Machine Learning


Dr. Sen-ching Cheung


Office hours: Make an appointment at

Course Description

Modern machine learning techniques, especially those based on deep neural networks, have achieved superior performance across a wide spectrum of applications, from autonomous driving and drug discovery to game development. Many of these ML techniques have a large number of parameters and hyperparameters, require an enormous amount of labeled data, and make it difficult to incorporate prior knowledge or uncertainty quantification during training. In this course, we will cover modern ML techniques from a Bayesian probabilistic perspective. Bayesian approaches excel at modeling uncertainty and prior knowledge, providing a powerful and consistent framework for common challenges in ML, including handling missing data, parameter estimation, model comparison, and model compression. This course will cover a wide range of topics including probabilistic graphical models, latent variable models, approximate inference, Bayesian neural networks, and generative models. The learning objective is to build the theoretical foundations and hands-on knowledge of Bayesian methods so that students can begin to apply these powerful techniques to their own research.

Tentative Topics

  1. Probabilistic Reasoning and Graphical Models
  2. Belief and Markov Networks
  3. Parameter Learning as Inference
  4. Statistics for Machine Learning
  5. Learning with latent variables
  6. Approximate Inference with Monte Carlo methods
  7. Approximate Inference with Variational methods
  8. Bayesian Neural Networks
  9. Variational Autoencoder & Variational Dropout
  10. Bayesian non-parametric methods: Gaussian and Dirichlet Processes


Textbooks

  1. D. Barber. Bayesian Reasoning and Machine Learning, Cambridge University Press, 2012 (electronic version, updated in 2017, is required)
  2. L. Wasserman. All of Statistics, Springer, 2004 (optional; PDF available online)
  3. I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning, 2016.
  4. Selected papers provided by the instructor.

Course Policy

  • Homework will be assigned throughout the semester and solutions will be provided. Late homework will not be accepted.
  • Two midterms will be given. Makeup midterms will only be given to students with documented excused absences.
  • Each student must complete a final project applying graphical models to research problems. All findings will be presented in a poster session and summarized in a final report.
  • Each student must complete all work by her or his own efforts. Any form of cheating and/or plagiarism on graded material will not be tolerated. Offenses will be prosecuted according to University of Kentucky’s STUDENT RIGHTS AND RESPONSIBILITIES.


Prerequisites

This course is suitable for graduate students from electrical engineering, computer engineering, or computer science who have taken undergraduate-level linear algebra, multivariate calculus, probability, and machine learning. Most of the programming will be based on Python, so familiarity with it is assumed. Prior knowledge of deep learning is desirable, though not strictly necessary.

Course Load

  1. Homework will be assigned throughout the semester. It must be typed and submitted as a single PDF file through Canvas.
  2. Programming assignments will be assigned through Google Colab. A complete assignment must contain proper documentation and run without errors.
  3. There will be an in-class midterm but no final examination.
  4. Each student will give one or two in-class presentations of assigned papers.
  5. There will be a team-based final project involving substantial work on a topic selected by the team and approved by the instructor.
  6. The final project has both a final presentation and a report.


Grading

Attendance (5%), Homework (45%), In-class Presentation (10%), Midterm (15%), Final Presentation (10%), Final Report (15%)