Abstracts

Lecture 1: Competitive Analysis of Online Algorithms.

Lecturer: Christoph Dürr

Abstract: The online computation paradigm applies to situations where the input of a computational problem is revealed to the algorithm as a sequence of requests. The algorithm has to serve each request with some action, which generates a cost depending on the current configuration. Classical examples include the ski-rental problem, the caching problem, the k-server problem and the TCP acknowledgement problem, as well as online versions of classical combinatorial optimization problems. Interesting variants are the secretary problem and the cow path problem. The performance of an online algorithm is typically measured by the so-called competitive ratio: it compares the cost of the algorithm to that of an optimal offline solution, and thus measures the price of not knowing future requests in advance. In this course, we will cover algorithmic ideas that are useful for these problems, as well as adversarial constructions that yield impossibility results. The ultimate goal of the course is to understand the primal-dual framework for online problems.
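
As a first taste of competitive analysis, here is a minimal sketch (an illustration of the classical break-even strategy, not taken from the course materials) of the ski-rental problem: rent each day as long as the rent paid stays below the purchase price, then buy. Whatever the length of the season, this pays less than twice what an optimal offline player pays, i.e. it is 2-competitive; the parameter names below are illustrative.

    # Break-even strategy for ski rental (hedged sketch; buy_price and rent_price
    # are illustrative parameters, not notation from the course).
    def break_even_cost(num_days, buy_price, rent_price=1):
        """Cost paid by the break-even strategy over a season of num_days days."""
        rent_paid = 0
        for _ in range(num_days):
            if rent_paid + rent_price < buy_price:   # renting today keeps us below the buy price
                rent_paid += rent_price
            else:                                    # otherwise buy now and pay nothing afterwards
                return rent_paid + buy_price
        return rent_paid

    def offline_optimum(num_days, buy_price, rent_price=1):
        """An offline player who knows the season length either rents throughout or buys on day one."""
        return min(num_days * rent_price, buy_price)

    # The ratio never exceeds 2 - rent_price/buy_price, which is best possible for deterministic algorithms.
    ratios = [break_even_cost(d, 10) / offline_optimum(d, 10) for d in range(1, 40)]
    assert max(ratios) < 2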

Detailed plan:

  1. Ski rental. Basic concepts.
  2. Scheduling. Charging schemes.
  3. Buffer management. Collecting items from a dynamic queue. Open problems.
  4. Caching. Bijective analysis.
  5. The k-server problem. The work function algorithm.
  6. The primal dual framework for online algorithms.

References:

  • Niv Buchbinder and Joseph Naor, The Design of Competitive Online Algorithms via a Primal-Dual Approach. Foundations and Trends in Theoretical Computer Science, 2009.

Lecture 2: Introduction to Stochastic Programming.

Lecturer: Nguyễn Việt Hưng

Abstract: The aim of the course is to introduce the main notions of stochastic programming: multistage stochastic programming, chance-constrained programming, stochastic integer programming, decomposition methods, and primal and dual methods.
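
To fix ideas, here is a minimal sketch (my own illustration, assuming a toy newsvendor model that is not part of the course plan) of a two-stage stochastic program with finitely many scenarios: a here-and-now order quantity is chosen before demand is known, and the expected second-stage outcome is evaluated over the scenarios. With a small scenario set the deterministic equivalent can be solved by enumeration; the decomposition methods in the plan address the case where this extensive form becomes too large.

    # Two-stage newsvendor (illustrative numbers): buy x units at unit cost 1 before demand
    # is known, then sell min(x, demand) units at unit price 3 once a scenario is revealed.
    scenarios = [(20, 0.3), (50, 0.5), (80, 0.2)]      # (demand, probability)
    unit_cost, unit_price = 1.0, 3.0

    def expected_cost(x):
        # First-stage purchasing cost minus expected second-stage revenue (the recourse value).
        revenue = sum(p * unit_price * min(x, d) for d, p in scenarios)
        return unit_cost * x - revenue

    # Solve the deterministic equivalent by enumerating candidate order quantities.
    best_x = min(range(0, 101), key=expected_cost)
    print(best_x, expected_cost(best_x))               # orders 50 units, expected cost about -73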

Detailed plan:

  1. Introduction to Stochastic Programming
  2. Duality and Optimality
  3. Decomposition methods
  4. Probabilistic Constraint Programming
  5. Stochastic Integer Programming
  6. Applications of Stochastic Programming

References:

  • John Birge and François Louveaux, Introduction to Stochastic Programming. Springer Series in Operations Research and Financial Engineering, 2011.
  • Alexander Shapiro, Darinka Dentcheva and Andrzej Ruszczyński, Lectures on Stochastic Programming: Modeling and Theory. MOS-SIAM Series on Optimization, 2009.

Lecture 3: Online Learning/Online Convex Optimization.

Lecturers: Nguyễn Kim Thắng and Lê Hải Yến

Abstract: The aim of the course is to introduce modern tools for designing online learning and optimization algorithms that remain robust in dynamically evolving settings. We will cover fundamental concepts and present the challenges involved in understanding the mathematics of machine learning, in particular deep learning.

Detailed plan:

  1. Introduction. Gentle start: Learning from Experts (a short sketch follows this list).
  2. Gradient Descent/Stochastic Gradient Descent. Upper and lower bounds. Applications: support vector machine (SVM) training.
  3. Mirror Descent. Regularization. Applications.
  4. Bandit Convex Optimization.
  5. Projection-free Algorithms. Second order methods.
  6. Learning Theory and Applications.
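
To make the gentle start of item 1 concrete, here is a minimal sketch (my own illustration, not taken from the lecture notes) of the multiplicative-weights / Hedge algorithm for learning from experts: maintain a weight per expert, follow the experts in proportion to their weights, and shrink each weight exponentially in the loss that expert suffers. The regret against the best expert in hindsight grows only logarithmically in the number of experts and as the square root of the horizon.

    import math

    def hedge(loss_rounds, n_experts, eta=0.5):
        """Run Hedge on a sequence of loss vectors (one entry in [0, 1] per expert per round)
        and return the algorithm's cumulative expected loss."""
        weights = [1.0] * n_experts
        total_loss = 0.0
        for losses in loss_rounds:
            total = sum(weights)
            probs = [w / total for w in weights]                      # play expert i with probability probs[i]
            total_loss += sum(p * l for p, l in zip(probs, losses))   # expected loss this round
            weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]  # exponential down-weighting
        return total_loss

    # Example: three experts over 100 rounds; expert 0 is the best in hindsight with loss 10.0.
    rounds = [[0.1, 0.6, 0.9] for _ in range(100)]
    print(hedge(rounds, 3))   # only a small additive regret above 10.0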

References:

  • Elad Hazan, Introduction to Online Convex Optimization. Foundations and Trends in Optimization, 2016.
  • Sébastien Bubeck, Convex Optimization: Algorithms and Complexity. Foundations and Trends in Machine Learning, 2015.