Objectives
This course provides a rigorous foundation in convex optimization theory and its applications, with a balance of mathematical principles and algorithmic techniques. Students begin by revisiting core ideas from analysis and convexity, which serve as the basis for studying unconstrained, constrained, and nonsmooth optimization problems. The course then explores duality theory and primal–dual approaches, highlighting their role in both theoretical analysis and computational methods. Practical experience is emphasized through programming assignments using MATLAB and/or Python, enabling students to simulate and analyze convex optimization problems. Applications are woven throughout, with examples drawn from communications, signal processing, and machine learning, such as resource allocation, signal recovery, and data-driven modeling. By the end of the course, students will be able to understand, apply, and critically assess optimization techniques in advanced research contexts.
Prerequisite: basic knowledge of calculus, linear algebra, and programming.
Final Exam: Oral exam.
Classroom: https://classroom.google.com/c/MjM0Njk1MjY3Mzha?cjc=5gerx6sd
Lessons: 12 hours, corresponding to 2 CFU
Dates: The course will be held in the "Aula Lettura" of the DIET Dept., according to the following schedule:
15-09-25, 9.30 - 12.00
16-09-25, 9.30 - 12.00
17-09-25, 9.30 - 12.00
18-09-25, 9.30 - 12.00
Contents
Basics of mathematical analysis: Differentiable functions, stationary points.
Convex functions: Methods to assess convexity, operations that preserve convexity.
Unconstrained minimization: Descent algorithms, Gradient method, Newton method, Stochastic gradient method.
Nondifferentiable optimization: Subgradients, the subgradient method.
Convex optimization: Convex sets, examples of convex problems, the minimum principle.
Constrained and regularized minimization: Projections, Gradient-projection method, Proximal operator, Proximal gradient method, sparsity-inducing minimization.
Duality theory: Lagrangian, Lagrange dual problem, Weak and strong duality, KKT conditions.
Primal-dual optimization methods: Dual ascent, Method of Multipliers, Alternating Direction Method of Multipliers.
Applications: Rate maximization in multi-carrier communications, beamforming, maximum likelihood estimation, smoothing of signals, compressive sensing, support vector machines, matrix completion.
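To make the second-order convexity test from the contents above concrete, here is a small Python sketch (an illustrative example with a hypothetical quadratic, not part of the official course material): a twice-differentiable function is convex iff its Hessian is positive semidefinite everywhere, which for a quadratic reduces to checking one constant matrix.

```python
import numpy as np

# Hypothetical example: f(x) = x1^2 + x1*x2 + x2^2 has the constant Hessian
# below; f is convex iff this Hessian is positive semidefinite.
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals = np.linalg.eigvalsh(H)   # eigenvalues of a symmetric matrix, ascending
is_convex = np.all(eigvals >= 0)  # PSD check: all eigenvalues nonnegative

print(eigvals)      # eigenvalues are 1 and 3, both nonnegative -> convex
print(is_convex)
```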
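The gradient method listed above can be sketched in a few lines. The quadratic objective and fixed step size below are hypothetical choices for illustration; the minimizer has the closed form A x = b, which lets us check convergence.

```python
import numpy as np

# Sketch: gradient descent on f(x) = 0.5 x^T A x - b^T x, with A symmetric
# positive definite, so the unique minimizer solves A x = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad_f(x):
    return A @ x - b

x = np.zeros(2)
alpha = 0.2               # any step below 2 / lambda_max(A) converges here
for _ in range(200):
    x = x - alpha * grad_f(x)

x_star = np.linalg.solve(A, b)   # closed-form minimizer, for comparison
print(np.allclose(x, x_star, atol=1e-6))
```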
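As a minimal sketch of the subgradient method (with a hypothetical one-dimensional objective): minimize f(x) = |x - 1| using a diminishing step size. Since the subgradient method is not a descent method, we track the best iterate seen so far.

```python
# Sketch: subgradient method on the nondifferentiable convex function
# f(x) = |x - 1|, whose minimizer is x* = 1.
def subgrad(x):
    # one valid element of the subdifferential of |x - 1|
    if x > 1.0:
        return 1.0
    if x < 1.0:
        return -1.0
    return 0.0            # at the kink, 0 lies in the subdifferential [-1, 1]

x = 5.0
best_x, best_f = x, abs(x - 1.0)
for k in range(1, 2001):
    x = x - (1.0 / k) * subgrad(x)   # diminishing step size 1/k
    f = abs(x - 1.0)
    if f < best_f:
        best_x, best_f = x, f

print(best_x, best_f)
```

With the 1/k step rule the iterates oscillate around the minimizer, but the best objective value converges to the optimum.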
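The proximal gradient method for sparsity-inducing minimization can be illustrated on a small synthetic lasso instance (the data below are hypothetical): the proximal operator of the l1 norm is soft-thresholding, which yields the ISTA iteration.

```python
import numpy as np

# Sketch: proximal gradient (ISTA) for the lasso
#     minimize 0.5 * ||A x - b||^2 + lam * ||x||_1
def soft_threshold(v, t):
    # prox of t * ||.||_1, applied componentwise
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.0])   # sparse ground truth
b = A @ x_true                                  # noiseless measurements

lam = 0.1
L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the smooth gradient
x = np.zeros(5)
for _ in range(500):
    grad = A.T @ (A @ x - b)                    # gradient of the smooth term
    x = soft_threshold(x - grad / L, lam / L)   # gradient step, then prox

print(x)    # close to the sparse x_true (small bias from the l1 penalty)
```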
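A worked example of duality (hypothetical problem, for illustration): for minimize x^2 subject to x >= 1, the Lagrangian is L(x, lam) = x^2 - lam*(x - 1), minimizing over x gives the dual function g(lam) = -lam^2/4 + lam, and strong duality holds since Slater's condition is satisfied.

```python
import numpy as np

# Sketch: zero duality gap on minimize x^2 subject to x >= 1.
# Dual function: g(lam) = min_x [x^2 - lam*(x - 1)] = -lam^2/4 + lam, lam >= 0.
lam = np.linspace(0.0, 4.0, 4001)  # grid over the dual feasible set
g = -lam**2 / 4 + lam              # concave dual function

d_star = g.max()                   # dual optimum, attained at lam = 2
p_star = 1.0                       # primal optimum, attained at x = 1

print(d_star, p_star)              # equal: strong duality (Slater holds)
```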
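The Alternating Direction Method of Multipliers can be sketched on a lasso instance by splitting the smooth and nonsmooth terms, minimize f(x) + g(z) subject to x = z, in the scaled-dual form used in [3]. The problem data below are synthetic and hypothetical.

```python
import numpy as np

# Sketch: ADMM for the lasso with f(x) = 0.5||Ax - b||^2, g(z) = lam ||z||_1.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 4))
x_true = np.array([0.0, 3.0, 0.0, -1.0])   # sparse ground truth
b = A @ x_true

lam, rho = 0.1, 1.0
n = A.shape[1]
AtA, Atb = A.T @ A, A.T @ b
x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)                            # scaled dual variable
for _ in range(300):
    # x-update: ridge-type linear solve
    x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
    # z-update: prox of (lam/rho) * ||.||_1
    z = soft_threshold(x + u, lam / rho)
    # scaled dual update
    u = u + x - z

print(z)    # sparse iterate, close to x_true
```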
Textbooks and resources:
[1] Lecture slides and code.
[2] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[3] S. Boyd et al., Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[4] CVX software for convex optimization.