Welcome to the class CS 8803: "Dynamics to Algorithms: Optimization, Sampling, and Games"
Offered at the Georgia Institute of Technology, Fall 2025
Instructor: Andre Wibisono
Class time: Fridays 11:00 am - 1:45 pm (starting Aug 22, 2025)
Class location: Ford Environmental Science & Technology (building 147) room L1105
Description: Algorithms are the fundamental building blocks of computation. In practice, algorithms run in discrete time. However, many algorithms can be derived as discretizations of continuous-time dynamical systems. Studying the dynamics in continuous time is useful for highlighting the underlying properties that make them work, and for guiding the development and analysis of algorithms in discrete time. In this course, we will survey continuous-time dynamics for solving fundamental problems in three domains: Optimization, Sampling, and Games, motivated by applications in computer science and machine learning. We will study the convergence guarantees of the dynamics in continuous time, and how to implement the dynamics as discrete-time algorithms while translating the desired convergence guarantees (or understanding why the translation fails).
Target audience: Students in computer science, mathematics, ISYE, and related fields who are interested in the design and analysis of algorithms from a continuous-time perspective. Students should have strong mathematical foundations. No prior knowledge of dynamics (ordinary/partial/stochastic differential equations) is required, but it is helpful to have. Basic knowledge of algorithms (e.g. gradient descent) is recommended.
Workload: Attendance and participation in the weekly lecture; light (bi)weekly assignments; a mini research project and presentation on topics related to dynamics and algorithms.
Questions? Please email Andre Wibisono at andre.wibisono@yale.edu
Tentative topics:
Optimization:
Gradient flow and discretization: gradient descent, proximal method, forward-backward method
Accelerated gradient flow (heavy ball), accelerated gradient descent
Natural gradient flow (on Hessian manifold), mirror flow, mirror descent
Wasserstein gradient flow (on the space of probability distributions)
Hamiltonian flow for optimization
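To illustrate the first topic above: the gradient flow x'(t) = -∇f(x(t)), discretized by forward Euler with step size h, gives gradient descent x_{k+1} = x_k - h ∇f(x_k). A minimal sketch (the quadratic objective and step size are illustrative choices, not part of the syllabus):

```python
import numpy as np

# Illustrative objective: f(x) = 0.5 * ||x||^2, so grad f(x) = x.
def grad_f(x):
    return x

# Forward-Euler discretization of the gradient flow x'(t) = -grad f(x(t)):
# x_{k+1} = x_k - h * grad f(x_k), i.e. gradient descent with step size h.
h = 0.1
x = np.array([3.0, -4.0])
for _ in range(200):
    x = x - h * grad_f(x)

print(np.linalg.norm(x))  # converges toward the minimizer at the origin
```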
Sampling:
Langevin dynamics and discretization: Unadjusted Langevin algorithm, proximal sampler
Mirror Langevin dynamics and discretization
Kinetic (underdamped) Langevin dynamics and discretization
Hamiltonian Monte Carlo and discretization
Diffusion models and discretization
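As a taste of the sampling topics: the Langevin dynamics dX_t = -∇f(X_t) dt + √2 dB_t has stationary distribution π ∝ e^{-f}, and its Euler–Maruyama discretization is the unadjusted Langevin algorithm. A sketch for f(x) = x²/2 (so π is the standard Gaussian); the step size, chain count, and iteration budget are illustrative assumptions:

```python
import numpy as np

# Target pi(x) proportional to exp(-f(x)) with f(x) = x^2 / 2 (standard Gaussian).
def grad_f(x):
    return x

# Unadjusted Langevin algorithm: Euler-Maruyama discretization of
# dX_t = -grad f(X_t) dt + sqrt(2) dB_t, with step size h.
rng = np.random.default_rng(0)
h = 0.01
x = np.zeros(10_000)          # run many chains in parallel
for _ in range(2_000):
    x = x - h * grad_f(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)

print(x.mean(), x.var())      # roughly 0 and 1, up to O(h) discretization bias
```

The bias in the variance is exactly the kind of continuous-to-discrete translation loss the course will quantify.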
Games:
Min-max gradient flow and discretization: min-max gradient descent, proximal, alternating
Min-max mirror flow and discretization: min-max mirror descent, proximal, alternating
Min-max Langevin dynamics and discretization for games on the space of distributions
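A classic discretization failure in games: on the bilinear game f(x, y) = xy, the continuous min-max gradient flow orbits the equilibrium, but simultaneous (explicit) gradient descent-ascent spirals outward, while the proximal (implicit) update contracts. A sketch with an illustrative step size:

```python
import numpy as np

# Bilinear game f(x, y) = x * y: min over x, max over y; equilibrium at (0, 0).
h = 0.1

# Simultaneous gradient descent-ascent (explicit Euler): each step multiplies
# the distance to the equilibrium by sqrt(1 + h^2), so the iterates diverge.
x, y = 1.0, 0.0
for _ in range(100):
    x, y = x - h * y, y + h * x
print(np.hypot(x, y))  # grows: (1 + h^2)^50, about 1.64

# Proximal (implicit Euler) update: solve x' = x - h*y', y' = y + h*x' in
# closed form; each step divides the distance by sqrt(1 + h^2), so it contracts.
x, y = 1.0, 0.0
for _ in range(100):
    x, y = (x - h * y) / (1 + h**2), (y + h * x) / (1 + h**2)
print(np.hypot(x, y))  # shrinks toward the equilibrium
```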
Techniques and tools:
Descent lemma and energy identity for greedy flows/methods
Lyapunov functions for accelerated/kinetic methods
Principle of least action, Euler-Lagrange equations
Continuity equations (density evolutions for differential equations)
Fokker-Planck equations (density evolutions for stochastic differential equations)
Geometry of Wasserstein space, Otto calculus, optimal transport
Isoperimetric inequalities (Poincaré and log-Sobolev inequalities)
Time-reversal of stochastic processes
Data processing inequalities
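The first tool above can be checked numerically: the descent lemma says that for an L-smooth f and step size h ≤ 1/L, each gradient step satisfies f(x_{k+1}) ≤ f(x_k) - (h/2)‖∇f(x_k)‖². A sketch on a random quadratic (the objective and dimensions are my illustrative choices):

```python
import numpy as np

# Illustrative L-smooth objective: f(x) = 0.5 * x^T A x with A symmetric PSD,
# so grad f(x) = A x and the smoothness constant L is A's largest eigenvalue.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B @ B.T
L = np.linalg.eigvalsh(A).max()

f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

# Descent lemma: with h <= 1/L, each gradient step decreases f by at least
# (h/2) * ||grad f(x)||^2.
h = 1.0 / L
x = rng.standard_normal(5)
for _ in range(50):
    g = grad(x)
    x_next = x - h * g
    assert f(x_next) <= f(x) - (h / 2) * (g @ g) + 1e-12
    x = x_next
```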
Topics may be adjusted based on students' interests. Students are encouraged to pursue research or review projects on related topics that are not covered in class.