Deep Unsupervised Learning

Spring 2020

About: This course will cover two areas of deep learning in which labeled data is not required: Deep Generative Models and Self-Supervised Learning. Recent advances in generative models have made it possible to realistically model high-dimensional raw data such as natural images, audio waveforms, and text corpora. Strides in self-supervised learning have begun to close the gap between supervised and unsupervised representation learning when fine-tuning to unseen tasks. This course will cover the theoretical foundations of these topics as well as their newly enabled applications.

If you want to peek ahead, this semester's offering will be very similar to last year's.

Instructors: Pieter Abbeel, Peter Chen, Jonathan Ho, Aravind Srinivas

Teaching Assistants: Alexander Li and Wilson Yan

Communication: https://piazza.com/berkeley/spring2020/cs294158

Lectures: Wednesdays 5-8pm (first lecture on 1/22) in 250 Sutardja Dai Hall

Prerequisites: significant experience with probability, optimization, deep learning

Office Hours

(starting week of 1/27)

For homework questions, TA office hours are the best venue. For anything else (lecture, final project, research, etc.), any office hours are a good fit.

Pieter: Thursdays 6:30-7:30pm (242 Sutardja Dai Hall)

Alex: Mondays 5-6 pm, Tuesdays 11-noon (Soda 326)

Wilson: Wednesdays 3-4pm (Cory 557), Fridays 2-3pm (Soda 347)

Homework (subject to change)

Homework policy here

HW1: Autoregressive Models (out 1/29, due 2/11) [solutions]

HW2: Flow Models (out 2/12, due 2/25) [solutions]

HW3: Latent Variable Models (out 2/26, due 3/10) [solutions]

HW4: GANs / Implicit Models (out 3/11, due 3/31) [solutions]


Midterm

Study handout here

During lecture slot on 4/22

Final Project

See final project page for details.

Main dates:

March 2: Project proposals

March 9: Approved project proposals

April 13: 3-page milestone

Week of May 11: Presentations

May 15: Report


Grading

60% Homework (15% each homework)

10% Midterm

30% Final Project

Letter grade breakdown

Tentative Schedule / Syllabus

[pdf, gslides, youtube, n/a] L1 (1/22) Intro

[pdf, gslides, youtube, colab] L2 (1/29) Autoregressive Models

[pdf, gslides, youtube, colab] L3 (2/5) Flow Models

[pdf, gslides, youtube, colab] L4 (2/12) Latent Variable Models

[pdf, gslides, youtube, colab] L5 (2/19) Implicit Models / Generative Adversarial Networks

[^ , ^ , youtube, ^] L6 (2/26) Implicit Models / Generative Adversarial Networks (ctd) + Final Project Discussion

[pdf, gslides, youtube, colab] L7 (3/11) Self-Supervised Learning / Non-Generative Representation Learning

[pdf, gslides, youtube] L8 (3/18) Strengths and Weaknesses of Unsupervised Learning Methods Covered Thus Far

[pdf, gslides, youtube] L9 (4/1) Semi-Supervised Learning; Unsupervised Distribution Alignment

[pdf, gslides, youtube] L10 (4/8) Compression

[pdf, gslides, youtube] L11 (4/15) Language Models -- Guest Instructor: Alec Radford (OpenAI)

(4/22) Midterm

[pdf, gslides, youtube] L12 (4/29) Representation Learning in Reinforcement Learning

(5/6) RRR Week (no lecture)

(5/13) Final Project Presentations + Final Project Reports due


FAQ

Q: How do I get into this course?

A: Please fill out this survey, which we will use for admissions.

Q: Can undergraduates take this course?

A: This course is targeted at a PhD-level audience, but exceptional undergraduates can certainly be good fits too. Your ability to take this course is determined not by your grad/undergrad student status but by what we'll evaluate from the survey.

Q: Is this a real course or a seminar?

A: This is a real course. Instructors will give most of the lectures. There will be substantial homework. There will be a midterm. There will be a substantial final project.

Q: I already want to start learning now, what can I do, can you point me to some research papers maybe?

A: Certainly: here is a zip file with about 100 papers highly relevant to this course. Happy reading!