Computational Control Theory
Course Description & Basic Information
A theoretical introduction to control theory and reinforcement learning, focusing on continuous state spaces and applications from the physical world and robotics. We emphasize computationally efficient algorithms and provable bounds. Special focus will be given to the newer methodologies of non-stochastic control and regret minimization in RL, which we will compare and contrast with the classical methodology of the field.
The course exercises and projects will require coding in Python.
The course is open to all students, but a strong mathematical background is required.
Textbooks and readings
1. Dynamic Programming and Optimal Control, by Dimitri Bertsekas
2. Underactuated Robotics by Russ Tedrake
3. Reinforcement Learning: An Introduction, by Richard S. Sutton and Andrew G. Barto
4. Reinforcement Learning: Theory and Algorithms (draft), by Alekh Agarwal, Nan Jiang, Sham M. Kakade
5. Bandit Algorithms, by Tor Lattimore and Csaba Szepesvári
6. Introduction to Online Convex Optimization, by E. Hazan, available here
7. Boosting: Foundations and Algorithms, by R. E. Schapire and Y. Freund
8. Lecture Notes: Optimization for Machine Learning, by E. Hazan, available here
Lectures: Wed, 13:30-16:30; location TBD
Professor: Elad Hazan, COS building 409. Office hours: TBD, or by appointment.
Requirements: This is a graduate-level course that requires significant mathematical background.
Required background: probability, discrete math, calculus, analysis, linear algebra, algorithms and data structures, and theory of computation / complexity theory.
Recommended: linear programming, mathematical optimization, and game theory.
Attendance: Attendance is required at all lectures. Class participation will be included in the final grade.
Grading and collaboration
Grading: TBD; to be announced in the first class.