Lecture 1: Competitive analysis of online algorithms.
Lecturer: Christoph Dürr
Abstract: The online computation paradigm applies to situations where the input of a computational problem is revealed to the algorithm as a sequence of requests. The algorithm must serve each request with some action, which generates a cost depending on the current configuration. Classical examples include the ski-rental problem, the caching problem, the k-server problem and the TCP acknowledgement problem, as well as online versions of classical combinatorial optimization problems; interesting variants are the secretary problem and the cow path problem. The performance of an online algorithm is typically measured by the so-called competitive ratio, which compares the cost of the algorithm to that of the optimal offline solution, and thus measures the price of not knowing future requests in advance. In this course, we will cover the algorithmic ideas that are useful for these problems, as well as adversarial constructions that establish impossibility results. The ultimate goal of the course is to understand the primal-dual framework for online problems.
- Ski rental. Basic concepts.
- Scheduling. Charging schemes.
- Buffer management. Collecting items from a dynamic queue. Open problems.
- Caching. Bijective analysis.
- The k-server problem. The work function algorithm.
- The primal-dual framework for online algorithms.
- Niv Buchbinder and Joseph (Seffi) Naor, The Design of Competitive Online Algorithms via a Primal-Dual Approach. Foundations and Trends in Theoretical Computer Science, 2009.
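The ski-rental problem mentioned in the abstract admits a one-line deterministic strategy whose competitive ratio is easy to verify by hand. The following sketch (not taken from the course notes; the cost model is the standard one with rent price 1 per day and buy price B) implements the classic break-even strategy, which rents for the first B-1 days and buys on day B, and checks its competitive ratio numerically:

```python
def break_even_cost(num_days: int, buy_price: int) -> int:
    """Cost of the break-even strategy over a season of num_days ski days."""
    if num_days < buy_price:
        return num_days                      # rented every day
    return (buy_price - 1) + buy_price       # rented B-1 days, then bought

def offline_cost(num_days: int, buy_price: int) -> int:
    """Optimal offline cost, knowing the season length in advance."""
    return min(num_days, buy_price)

# The worst-case ratio over season lengths is 2 - 1/B, attained when
# the season ends exactly on the day the strategy buys.
B = 10
ratio = max(break_even_cost(d, B) / offline_cost(d, B) for d in range(1, 100))
print(ratio)  # 1.9, i.e. 2 - 1/B
```

The worst case arises when the adversary stops the season right after the purchase: the algorithm has paid 2B - 1 while the offline optimum pays only B.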
Lecture 2: Introduction to Stochastic Programming
Lecturer: Nguyễn Việt Hưng.
Abstract: The aim of the course is to introduce the main notions of stochastic programming: multistage stochastic programming, chance-constrained programming, stochastic integer programming, decomposition, and primal and dual methods.
- Introduction to Stochastic Programming
- Duality and Optimality
- Decomposition methods
- Chance-Constrained (Probabilistic) Programming
- Stochastic Integer Programming
- Applications of Stochastic Programming
- John Birge and François Louveaux, Introduction to Stochastic Programming. Springer Series in Operations Research and Financial Engineering, 2011.
- Alexander Shapiro, Darinka Dentcheva and Andrzej Ruszczyński, Lectures on Stochastic Programming: Modeling and Theory. MOS-SIAM Series on Optimization, 2009.
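A minimal sketch of the two-stage structure at the heart of the course (the scenario data below is invented for illustration): the newsvendor problem as the simplest stochastic program. The first-stage decision is an order quantity x at unit cost c; the second-stage recourse sells min(x, demand) at price p under each demand scenario. With finitely many scenarios the expected recourse value is a plain sum, and the small instance here can be solved by enumeration:

```python
c, p = 1.0, 3.0                                 # purchase cost, selling price
scenarios = [(20, 0.3), (40, 0.5), (60, 0.2)]   # (demand, probability) pairs

def expected_profit(x: float) -> float:
    """First-stage cost plus expected second-stage revenue: E[p*min(x,D)] - c*x."""
    return sum(prob * p * min(x, d) for d, prob in scenarios) - c * x

# Enumerate candidate order quantities. The classical critical-ratio
# solution orders the (p-c)/p quantile of demand, here the 2/3 quantile.
best_x = max(range(0, 61), key=expected_profit)
print(best_x, expected_profit(best_x))  # 40 is the 2/3-quantile of demand
```

Real multistage instances replace this enumeration with the decomposition methods covered in the course (e.g. the L-shaped method), but the scenario-expectation structure is the same.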
Lecture 3: Online Learning/Online Convex Optimization.
Lecturer: Nguyễn Kim Thắng and Lê Hải Yến
Abstract: The aim of the course is to introduce modern tools for designing online learning algorithms that are robust in dynamically evolving settings. We will cover fundamental concepts and present current challenges in understanding the mathematics of machine learning, in particular deep learning.
- Introduction. Gentle start: Learning from Experts.
- Gradient Descent/Stochastic Gradient Descent. Upper and lower bounds. Applications: support vector machine (SVM) training.
- Mirror Descent. Regularization. Applications.
- Bandit Convex Optimization.
- Projection-free Algorithms. Second order methods.
- Learning Theory and Applications.
- Elad Hazan, Introduction to Online Convex Optimization. Foundations and Trends in Optimization, 2016.
- Sébastien Bubeck, Convex Optimization: Algorithms and Complexity. Foundations and Trends in Machine Learning, 2015.
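The opening topic, learning from experts, can be illustrated with the Hedge (multiplicative weights) algorithm. The sketch below (parameters and data are illustrative, not from the course) plays a distribution over N experts, observes losses in [0,1] each round, and reweights exponentially; the standard analysis gives regret O(sqrt(T log N)) against the best fixed expert:

```python
import math
import random

def hedge(losses, eta):
    """Run Hedge on a T x N matrix of expert losses; return the algorithm's
    total expected loss when expert i is played with probability w[i]/sum(w)."""
    n = len(losses[0])
    w = [1.0] * n                    # one weight per expert
    total = 0.0
    for round_losses in losses:
        s = sum(w)
        total += sum(wi / s * li for wi, li in zip(w, round_losses))
        # exponential reweighting: experts with low loss gain mass
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    return total

random.seed(0)
T, N = 1000, 10
losses = [[random.random() for _ in range(N)] for _ in range(T)]
eta = math.sqrt(math.log(N) / T)     # learning rate tuned to the horizon
alg = hedge(losses, eta)
best = min(sum(losses[t][i] for t in range(T)) for i in range(N))
print(alg - best)                    # regret, bounded by O(sqrt(T log N))
```

Gradient descent, mirror descent, and the bandit algorithms in the syllabus generalize this scheme from the simplex of expert distributions to arbitrary convex decision sets.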