Probabilistic Graphical Model


Dr. Sen-ching Cheung


Office hours: Make an appointment at


Regular class: WF 9:00am-10:15am at FPAT 460 (NEW TIME! NEW ROOM!)

Final Examination: No Final

Course Description

A central tenet of any empirical science is to construct probabilistic models for prediction and estimation based on available data. The enormous advances in computing, sensing, and networking technologies provide us with an unprecedented capability to collect and store inordinate amounts of data. Much of this data is noisy, inter-related, and high-dimensional. One particular framework has gradually emerged as the most appropriate tool for handling uncertainty and for building complex models in a modular and algorithmic fashion. This framework is the Probabilistic Graphical Model, the focus of the EE639 course this semester. Probabilistic graphical models are probabilistic models that encode local conditional-independence relationships among a large number of random variables. By choosing an appropriate (sparse) graph to describe the data, powerful and rigorous techniques become available for prediction, estimation, and data fusion, as well as for handling uncertainty and missing data. Many classical multivariate systems in pattern recognition, information theory, statistics, and statistical mechanics are special cases of this general framework; examples include hidden Markov models, regression, mixture models, Kalman filters, and Ising models. In this course, we will study how graph theory and probability can be elegantly combined under this framework to represent many commonly used probabilistic tools, and how they can be applied to solve many practical problems. We will put equal emphasis on theoretical understanding of the subject and on practical know-how for using it in real applications. The grading is based on three components: homework assignments throughout the semester, two midterms, and a final project.
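As a small illustration of the idea described above (a sketch for motivation, not part of the course materials), consider a toy chain-structured model A → B → C over binary variables. The graph asserts that C is conditionally independent of A given B, so the joint distribution factorizes into small local tables; the numbers below are arbitrary example values:

```python
import numpy as np

# Toy chain-structured graphical model A -> B -> C over binary variables.
# The graph encodes the factorization p(a, b, c) = p(a) * p(b|a) * p(c|b).
p_a = np.array([0.6, 0.4])                 # p(A)
p_b_given_a = np.array([[0.7, 0.3],        # p(B | A=0)
                        [0.2, 0.8]])       # p(B | A=1)
p_c_given_b = np.array([[0.9, 0.1],        # p(C | B=0)
                        [0.5, 0.5]])       # p(C | B=1)

# Build the full joint table directly from the factorization.
joint = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_b[None, :, :]
assert np.isclose(joint.sum(), 1.0)        # a valid joint distribution

# Verify the conditional independence the graph encodes:
# p(c | a, b) is the same for every value of a.
p_c_given_ab = joint / joint.sum(axis=2, keepdims=True)
assert np.allclose(p_c_given_ab[0], p_c_given_ab[1])
```

The point is that three small tables (2 + 4 + 4 numbers) determine the full 8-entry joint; with many variables, this sparsity is what makes inference tractable.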

Tentative Topics

  1. Mathematical Preliminaries and Probabilistic Reasoning
  2. Overview of Graphical Models
  3. Efficient Inference in Trees
  4. Junction Tree Algorithm
  5. Basic Machine Learning Concepts
  6. Learning as Inference
  7. Learning with Hidden Variables
  8. Nearest Neighbor Classification
  9. Linear Dimension Reduction
  10. Mixture Models
  11. Discrete-State Markov Models
  12. Continuous-State Markov Models
  13. Approximate inference with Sampling
  14. Approximate inference with Variational techniques

Textbooks

  1. D. Barber, Bayesian Reasoning and Machine Learning, Cambridge, 2012 (Required)
  2. D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques, MIT Press, 2009. (Optional)
  3. Additional reading materials and programming examples will be distributed throughout the semester.

Course Policy

  • Homework will be assigned throughout the semester and solutions will be provided. Late homework will not be accepted.
  • Two midterms will be given. Makeup midterms will only be given to students with documented excused absences.
  • Each student must complete a final project applying graphical models to research problems. All findings will be presented in a poster session and summarized in a final report.
  • Each student must complete all work by his or her own efforts. Any form of cheating and/or plagiarism on graded material will not be tolerated. Offenses will be prosecuted according to the University of Kentucky’s STUDENT RIGHTS AND RESPONSIBILITIES.


Prerequisites

  • Required: Undergraduate-level linear algebra (matrix operations), multivariate calculus, probability (random variables; discrete and continuous distributions, especially the Gaussian), and statistics (confidence levels and hypothesis testing).
  • Desired: Basic understanding of graph theory, stochastic processes, information theory, and optimization.
  • Students will need to be familiar with Matlab, S+, R, or a related matrix-oriented programming language.

Grading

30% - Homework, 20% - Midterm 1, 20% - Midterm 2, 30% - Final Project