Theoretical Machine Learning

Course Description & Basic Information

Can the mechanism of learning be automated and implemented by a machine?

In this course we formally define and study various models that have been proposed for learning. The course will present and contrast the statistical, computational, game-theoretic, and reinforcement models for learning. We will present and rigorously analyze some of the most successful algorithms in machine learning that are extensively used today. Course topics include: introduction to statistical learning theory and generalization error bounds; learning in adversarial settings and the online learning model; mathematical optimization in machine learning; learning with partial observability; control theory and reinforcement learning; provable methods for deep learning.

Textbook and readings

Textbooks:

  1. Introduction to Online Convex Optimization, Second Edition (forthcoming MIT Press), by E. Hazan, available here

Extra reading:

2. Prediction, Learning, and Games, by N. Cesa-Bianchi and G. Lugosi

3. Understanding Machine Learning: From Theory to Algorithms, by Shai Shalev-Shwartz and Shai Ben-David

4. Boosting: Foundations and Algorithms, by R. E. Schapire and Y. Freund

5. Reinforcement Learning: Theory and Algorithms (draft), by Alekh Agarwal, Nan Jiang, Sham M. Kakade

Lecture notes from related courses:

1. Lecture Notes: Optimization for Machine Learning, by E. Hazan, available here

2. Lecture Notes: Computational Control Theory, by E. Hazan, available here

3. Lecture Notes on modern convex optimization, by A. Ben-Tal and A. Nemirovski, available here

Mathematical background notes

Notes by Ted Summers from previous years, providing background for students who have not completed the full math requirements:

notes on convex optimization and analysis; part 2 of these notes

Administrative Information

Lectures: Wed, 09:00-12:00 in COS building, room 105

Professor: Elad Hazan, CS building 409, Office hours: Wednesday 14:00-15:00, or by appointment.

Teaching Assistants: Nataly Brukhim, CS building, Office hours: by appointment

Discussion channel: please see the Canvas system for this course, which has an Ed channel.

Requirements: This is a graduate-level proof-based course that requires significant mathematical background.

Required background: probability, discrete math, calculus, analysis, linear algebra, algorithms and data structures, theory of computation / complexity theory

Recommended: linear programming, mathematical optimization, game theory

Attendance and the use of electronic devices: Attendance is expected at all lectures. The use of laptops and similar devices for note-taking is permitted. Food is not allowed, but hot/cold beverages are allowed.

Grading and collaboration

Grading: homework assignments (60%), class participation (10%), and a final take-home exam (30%). There will be 2-6 homework assignments, roughly equally spaced, each consisting of a small number of problems (optional programming). All assignments carry equal credit.
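The weighted scheme above can be sketched as follows. This is only an illustration: the weights are taken from the syllabus, while the component scores in the example are hypothetical.

```python
# Weights stated in the syllabus: homework 60%, participation 10%, final 30%.
WEIGHTS = {"homework": 0.60, "participation": 0.10, "final": 0.30}

def course_grade(scores: dict) -> float:
    """Weighted average of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[part] * scores[part] for part in WEIGHTS)

# Example with hypothetical component scores:
print(course_grade({"homework": 90, "participation": 100, "final": 80}))  # 88.0
```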

Additional credit is given for constructive participation in class and for solving occasional "bonus problems" presented in class. Points may be deducted for repeated unexcused absences.

Turning in assignments and late policy: Coordinate submission of assignments with the TA. Each late day reduces the attainable credit for that assignment by 10%.
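A minimal sketch of the late penalty, assuming "10%" means 10 percentage points of the maximum credit per late day (this reading is an assumption, not confirmed by the syllabus):

```python
def attainable_credit(max_credit: float, late_days: int) -> float:
    """Credit cap after the late penalty, assuming each late day
    removes 10% of the maximum credit (never going below zero)."""
    return max(0.0, max_credit * (1 - 0.10 * late_days))

print(attainable_credit(100, 0))  # 100.0
print(attainable_credit(100, 3))  # 70.0
```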

Scribe notes: These are optional but graded, and can earn extra credit. Every student who is interested will have the option of writing notes.

Collaboration: Collaboration on the problem sets is allowed; the final exam is individual. Your collaborators on each assignment must be clearly detailed: before the solution to each question, list all people you collaborated with on that particular question.