In this course we will give students broad exposure to a diverse set of active research topics in machine learning. We will discuss recent papers across a range of advanced subjects (Probabilistic Machine Learning, ML Applications in Science, and Causality) that reflect the current research interests of the instructors. Students will then select one of these subjects to explore in depth in the form of a course project. After the first lecture, each week two students will lead the discussion of the papers chosen for that week. Ideally the presentations will motivate the area, put the work in context, and describe the innovations in the papers. We will then discuss the work as a group. One instructor will help facilitate the discussion and provide guidance about the content of the presentations.
Instructors: Steve Yadlowsky, Ben Adlam, Zelda Mariet, Zi Wang, Alexander D'Amour, David Belanger, Jasper Snoek, Alex Wiltschko.
Course correspondence will be through edstem.
Time and location: SEC 2.118, Fridays, 12:45 - 3:30 pm
Class participation - 30%
Class presentations - 20%
Project proposal - 10% - due 3/15
Project presentation - 10% - due 4/19
Project report and code - 30% - due 5/3
Each class meeting will be three hours of in-depth discussion on a specific topic. Two students will present papers each week, and each student is expected to facilitate a discussion 1-2 times per semester. The presenters for each week are expected to coordinate with each other and with the course instructors in advance to divide up the assigned papers and any additional background material that will need to be discussed.
Discussions will center on:
Understanding the strengths and weaknesses of these methods.
Understanding how these methods relate to one another and to previous approaches.
Extensions or applications of these methods.
Experiments that might better illuminate their properties.
The ultimate goal is that these discussions will reveal gaps in current research, generate new ideas, and ideally lead to novel research directions.
Students can work on projects individually or in pairs. The goal of the projects is to allow students to dive more deeply into one of the topics of the course. The project can be an extension of existing work, a novel application of existing methods, an exploration of a new research idea, or a non-trivial implementation and empirical study of existing methods. The grade will depend on the ideas, how well you present them in the report, how clearly you position your work relative to existing literature, how illuminating your experiments are, and how well-supported your conclusions are.
Each group of students will write a short (around 2 pages) research project proposal, which ideally will be structured similarly to a standard paper. It should include a description of a minimum viable project, some nice-to-haves if time allows, and a short review of related work. You don't have to do exactly what your proposal says - the point of the proposal is mainly to have a plan and to make it easy for us to give you feedback.
Towards the end of the course, everyone will present their project in a short presentation.
At the end of the class you'll hand in a project report (around 4 to 8 pages), prepared in the format of a machine learning conference paper such as NeurIPS or ICML. Note that we do not expect the report to be a completed research paper of that caliber, but we hope some projects will be a first step in that direction.
Unfortunately, we expect this course will be oversubscribed, so we must prioritize who can attend. We will prioritize graduate students whose research interests will benefit from the material, but we are open to other qualified and interested candidates. As this is an advanced course in which we will discuss recent literature at a technical level, we expect a significant background in math, statistics, and machine learning. Note that the deadline to enroll in limited-enrollment courses is Jan 18 and that we will notify you by the end of Jan 19.
Please submit your statement using this form.
Course presentations can be prepared independently, but we encourage each week's presenters to collaborate so that there is less redundancy and overlap (e.g., two near-identical introductions to the same topic). We do expect both students to contribute substantially to the week's presentations. For the project, if students choose to work together, we will ask for a statement detailing the individual contributions of each student.
If a student must isolate because of Covid, sessions will be recorded and the recordings made available on Canvas. If an instructor needs to isolate, we anticipate changing the schedule so that we can maintain continuity in the classroom. In the extreme case, we may need to shift to remote learning.
Sign-up spreadsheet here. Sign up by 12:45 pm on Friday Feb 4, 2022.
Jan 28: Intro
Preview of Bayesian Models (Lecture Slides: See below)
Feb 4: Bayesian Models
Paper 1: How Good is the Bayes Posterior in Deep Neural Networks Really?, Wenzel et al., ICML 2020
Paper 2: Training independent subnetworks for robust prediction. Havasi et al., ICLR 2021
Preview of Gaussian Processes, Bayesian Optimization and Active Learning
Feb 11: Gaussian Processes, Bayesian Optimization and Active Learning
Paper 1: Scalable Bayesian Optimization Using Deep Neural Networks
Paper 2: Robots that can adapt like animals
Paper 3: Active Learning Literature Survey (page 1-21)
Preview of Probabilistic Models of Diversity
Feb 18: Probabilistic Models of Diversity
Paper 1: Learning under Model Misspecification: Applications to Variational and Ensemble methods
Paper 2: Diversity is All You Need: Learning Skills without a Reward Function
Preview of Robustness to "Spurious Correlations"
Feb 25: Causality & Robustness to "Spurious Correlations"
Paper 3: Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests
Preview of Causal Effect Identification and Estimation
Mar 4: Causal Inference I: Fundamentals (Final project proposal soft deadline)
Paper 1: Quasi-Oracle Estimation of Heterogeneous Treatment Effects
Paper 3 (optional, no presentation): Adapting Neural Networks for the Estimation of Treatment Effects
Preview of ML for Causality and Evaluation Metrics
Mar 11: Causal Inference II: Connections and Applications (Final project informal proposal due)
Paper 1: Off-Policy Evaluation via the Regularized Lagrangian
Paper 2: Combining Experimental and Observational Data to Estimate Treatment Effects on Long Term Outcomes
Paper 3 (optional, no presentation): RieszNet and ForestRiesz: Automatic Debiased Machine Learning with Neural Nets and Random Forests
Preview of Deep Learning Theory
Mar 25: Deep Learning Theory
Apr 1: ML for Proteins
Paper 1: Highly accurate protein structure prediction with AlphaFold
Paper 2: Deep Extrapolation for Attribute-Enhanced Generation
Apr 8: ML for Chemistry
Paper 3: Pursuing a Prospective Perspective
Apr 15: Panel Discussion & Project Updates (attendance expected)
Apr 18: Final project posters due (so that they can be printed by the 22nd)
Apr 22: Final project poster session (attendance expected)
A project can contribute in any of these areas (or a combination of them):
Methods: systematic assessment of the strengths and weaknesses of a collection of novel or existing methods when applied to real or synthetic data.
Applications: use of machine learning to help solve a real-world problem.
Theory: formal statements concerning guarantees about machine learning problems or methods.
Exposition: presentation of a unified framework covering a set of existing theory or methods. The goal is to provide accessible educational content to others, as well as to identify opportunities for developing novel methods and theory.
Software: development of machine learning tools that are fast, general-purpose, and well-tested.
When evaluating your projects, we will be focusing on the following criteria:
Are your technical statements precise and correct?
Did you properly cite related work and explain the background concepts?
Given your specific machine learning background, did the work stretch you outside of your comfort zone?
Is your write-up well-written and was your presentation engaging? If your project is software-based, was your code high-quality and reusable?
If working as part of a team, did you collaborate effectively?
Note that projects do not necessarily need to focus on the subjects discussed in class, though they should be relevant to recent advances in machine learning.
Some example projects
Implement a few MCMC methods and compare their performance on a variety of low-dimensional inference problems as well as on Bayesian deep nets. Here, it may be best to use synthetic data, so that properties of the problem can be tuned. (A minimal sampler is sketched after this list.)
Apply Thompson sampling or GP-based Bayesian optimization to a black-box optimization problem in biology. In the interest of time and money, it could be run on a software-defined fitness function rather than on actual experiments. (See the second sketch after this list.)
Derive regret bounds for various bandit algorithms.
Write a tutorial covering the breadth of recent advancements in variational inference.
Implement your own autodiff library from scratch, or contribute new features or example applications to jax. For example, you could develop a user-friendly library for MCMC in jax. This blog post is an excellent example of such a project. (A toy autodiff is sketched below.)
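To make the first example concrete, here is a minimal random-walk Metropolis sampler in plain Python/numpy. This is only a sketch under our own assumptions: the function names, the standard-Gaussian example target, and the step size are all illustrative, and a real project would compare several samplers across a range of targets.

```python
import numpy as np

def log_target(x):
    # Illustrative target: a standard 2-D Gaussian. Swap in your own log-density.
    return -0.5 * np.sum(x ** 2)

def random_walk_metropolis(log_p, x0, n_steps=5000, step_size=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples, n_accept = [], 0
    for _ in range(n_steps):
        proposal = x + step_size * rng.standard_normal(x.shape)
        # Accept with probability min(1, p(proposal) / p(x)), computed in log space.
        if np.log(rng.uniform()) < log_p(proposal) - log_p(x):
            x = proposal
            n_accept += 1
        samples.append(x.copy())
    return np.stack(samples), n_accept / n_steps

samples, accept_rate = random_walk_metropolis(log_target, x0=np.zeros(2))
print(samples.mean(axis=0), accept_rate)  # the sample mean should approach [0, 0]
```

A comparison project would then vary the target (dimension, correlation, multimodality) and report quantities such as effective sample size per likelihood evaluation.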
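For the second example, the simplest setting in which to try Thompson sampling is a Bernoulli bandit with Beta posteriors. The sketch below is that toy version, not the GP-based pipeline a biology application would ultimately need; the arm means and horizon are made up for illustration.

```python
import numpy as np

def thompson_sampling_bernoulli(true_means, n_rounds=2000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes, failures = np.ones(k), np.ones(k)  # Beta(1, 1) priors on each arm
    rewards = []
    for _ in range(n_rounds):
        # Sample a mean for each arm from its posterior and play the argmax.
        arm = np.argmax(rng.beta(successes, failures))
        r = rng.uniform() < true_means[arm]
        successes[arm] += r
        failures[arm] += 1 - r
        rewards.append(r)
    return np.array(rewards)

rewards = thompson_sampling_bernoulli([0.3, 0.5, 0.7])
print(rewards.mean())  # should approach 0.7 as the best arm is identified
```

Replacing the per-arm Beta posterior with a GP over a continuous design space turns the same sample/act/observe/update loop into GP-based Bayesian optimization.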
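And for the last example, a toy scalar reverse-mode autodiff, to show the chain-rule bookkeeping a from-scratch library has to do. The class name and the operator coverage are our own choices; real libraries such as jax also handle arrays, control flow, and compilation.

```python
class Var:
    """A scalar that records how it was computed, for reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        # Chain rule: push the incoming gradient to each parent, scaled by the
        # local derivative. Naive recursion revisits shared subexpressions,
        # which is fine for a toy; real systems traverse in topological order.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = x * x + x           # f(x) = x^2 + x, so f'(3) = 2 * 3 + 1 = 7
y.backward()
print(y.value, x.grad)  # 12.0 7.0
```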