Two-person games: zero-sum and non-zero-sum games, cooperative and non-cooperative games
N-person games
Preference orders and group decision
Utility theory
Decision trees, deterministic/probabilistic dynamic programming and Markov decision processes
Tutorial: an introduction to proximal splitting methods
This is a short course consisting of two lectures. More information about the course can be found by following this link (in Spanish).
Location: CMM - University of Chile.
Abstract
Large-scale problems usually have a favorable structure, and exploiting this structure is key to solving them efficiently. Decomposition methods in optimization break complex problems into smaller, more manageable pieces in order to exploit the properties of the underlying components separately. In this tutorial, we focus on a special class of decomposition methods called proximal splitting methods. We start by reviewing the cornerstones of several modern optimization methods: gradient descent and the proximal point algorithm. Then, we discuss the proximal-gradient method as well as the Alternating Direction Method of Multipliers in the context of one of the most common statistical supervised learning tasks, the regularized least squares problem. We analyze convex and nonconvex versions of this problem and compare the benefits and drawbacks of the different formulations. This tutorial has both a theoretical and a practical component, as we perform numerical experiments in Python. If time permits, we also discuss other prominent instances of proximal splitting methods, namely the Douglas-Rachford and Chambolle-Pock methods.
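As a flavor of what the practical component covers, here is a minimal sketch of the proximal-gradient method (ISTA) applied to the l1-regularized least squares (LASSO) problem, min 0.5||Ax - b||^2 + lam * ||x||_1. This is an illustrative example written for this announcement, not the actual course material; the function names, step size choice, and synthetic data are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Proximal-gradient method for min 0.5*||Ax - b||^2 + lam*||x||_1:
    # a gradient step on the smooth least-squares term, followed by the
    # proximal step on the nonsmooth l1 term.
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Small synthetic instance: a sparse signal observed through a random matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, b, lam=0.1)
```

The split is the point of the method: the least-squares term is handled by an explicit gradient step, while the l1 term, which is nonsmooth, is handled through its proximal operator, which here has the closed-form soft-thresholding expression.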