Spring semester 2025 at the University of Wrocław.
The lecture and the exercise session are led by Martin Böhm.
The practical/lab sessions are given by Filip Chudy.
Lecture 1: Main topic of the course: constrained and unconstrained optimization problems. A general overview of iterative methods. Program hierarchies (LP, QP, SOCP, SDP). Historical examples: Gauss and Ceres, linear optimization for military rationing, interior point methods as part of the best paper at FOCS 2022. Examples of hard problems: NP-hard problems and uncomputable functions as optimization problems. Handout.
Lecture 2: A quick overview of modeling with Linear Programming and Integer Linear Programming. Handout.
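As a small taste of LP modeling (not from the handout; the problem data below are made up for illustration), here is how a two-variable LP might be solved with scipy.optimize.linprog:

```python
# Illustrative LP (made-up data):
#   maximize  3x + 2y
#   subject to x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so we negate the objective.
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0])            # negated objective coefficients
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                # optimizer and optimal value
```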
Lecture 3: Convexity of sets and functions. Definition and basic properties. The hyperplane separation lemma and the supporting hyperplane lemma. The first-order and second-order convexity criteria. Observation that for a convex function, a local minimum is a global minimum. Handout.
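For quick reference, the two criteria can be stated as follows (standard formulations for f with a convex domain, matching [BV, Chapter 3]):

```latex
% First-order criterion: a differentiable f (with convex domain) is convex
% iff it lies above all of its tangent hyperplanes:
f(y) \ge f(x) + \nabla f(x)^{\top} (y - x)
  \quad \text{for all } x, y \in \operatorname{dom} f.
% Second-order criterion: a twice differentiable f is convex iff its
% Hessian is positive semidefinite on all of dom f:
\nabla^2 f(x) \succeq 0 \quad \text{for all } x \in \operatorname{dom} f.
```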
Lecture 4: Line search as a general tool for unconstrained minimization. The Wolfe conditions (Armijo, curvature). Proof of convergence for any line search in a finite number of steps, assuming Lipschitz continuity and the Wolfe conditions. [NW, Chapter 3]. Handout not yet available; the homework tasks are the same as last year, i.e., in this document.
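A minimal backtracking sketch enforcing the Armijo (sufficient decrease) condition; a full Wolfe line search would also check the curvature condition, which this sketch omits. All names and constants are illustrative, not from the lecture:

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, c1=1e-4, rho=0.5):
    """Shrink alpha until the Armijo condition
    f(x + alpha*p) <= f(x) + c1*alpha*grad_f(x)^T p holds.
    Assumes p is a descent direction (grad_f(x)^T p < 0)."""
    alpha = alpha0
    fx = f(x)
    slope = grad_f(x) @ p             # directional derivative along p
    while f(x + alpha * p) > fx + c1 * alpha * slope:
        alpha *= rho                  # backtrack geometrically
    return alpha

# One steepest-descent step on f(x) = x^T x (illustrative):
f = lambda x: x @ x
grad_f = lambda x: 2 * x
x = np.array([1.0, -2.0])
p = -grad_f(x)
print(x + backtracking_line_search(f, grad_f, x, p) * p)
```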
Lecture 5: Strong convexity as a property of convex functions that allows for a clean convergence analysis of gradient descent. Strong convexity implies bounds on both the smallest and the largest eigenvalue of the Hessian. Proof of the linear convergence rate of gradient descent under strong convexity assumptions. The condition number and its role in the convergence rate of GD. [BV, Chapters 9.1-9.3]. Handout (draft version, report any inaccuracies). No exercises were presented that day, so last week's exercises remain open.
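A minimal gradient descent sketch on a strongly convex quadratic, assuming the standard fixed step size 1/L; the matrix below is made up to illustrate how the condition number enters:

```python
import numpy as np

# Gradient descent on the strongly convex quadratic f(x) = 1/2 x^T A x - b^T x.
# With step size 1/L, where L is the largest eigenvalue of A, the error
# contracts linearly at a rate governed by the condition number L/m.
A = np.array([[10.0, 0.0],
              [0.0, 1.0]])            # condition number 10 (made-up data)
b = np.array([1.0, 1.0])

L = np.linalg.eigvalsh(A).max()       # Lipschitz constant of the gradient
x = np.zeros(2)
for _ in range(200):
    x = x - (1.0 / L) * (A @ x - b)   # gradient step with fixed step size 1/L

print(x, np.linalg.solve(A, b))       # iterate vs. exact minimizer
```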
Lecture 6: Newton method. Historical context, Newton method for finding a root of a function of one variable. Definition of the Newton method for unconstrained optimization. Proof of affine invariance of the Newton method. Damped and pure Newton methods. Definition of operator norms. Convergence estimates in the case where a function is strongly convex and the Hessian is Lipschitz-continuous. [BV, Chapter 9.5]. Handout (including exercises).
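A sketch of the damped Newton method with Armijo backtracking, stopping on half the squared Newton decrement as in [BV, Chapter 9.5]; the test function and all names are illustrative:

```python
import numpy as np

def damped_newton(f, grad_f, hess_f, x, tol=1e-10, c1=1e-4, rho=0.5, max_iter=50):
    """Damped Newton: take the Newton direction, then backtrack on the step
    length t until the Armijo condition holds (pure Newton always uses t = 1)."""
    for _ in range(max_iter):
        g = grad_f(x)
        p = np.linalg.solve(hess_f(x), -g)    # Newton direction
        if -(g @ p) / 2 < tol:                # half the squared Newton decrement
            break
        t = 1.0
        while f(x + t * p) > f(x) + c1 * t * (g @ p):
            t *= rho                          # damping via backtracking
        x = x + t * p
    return x

# Illustrative strictly convex test function (not from the lecture):
f = lambda x: np.exp(x[0]) + x[0]**2 + (x[1] - 1.0)**2
grad_f = lambda x: np.array([np.exp(x[0]) + 2*x[0], 2*(x[1] - 1.0)])
hess_f = lambda x: np.array([[np.exp(x[0]) + 2.0, 0.0], [0.0, 2.0]])
print(damped_newton(f, grad_f, hess_f, np.array([2.0, -3.0])))
```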
Lecture 7: Conjugate gradient method. The conjugate direction method as an iterative solver for a linear system of equations. Theorem and proof that the conjugate direction method converges in n iterations. The conjugate gradient method for linear systems. Theorem and proof of the basic properties of CGM. A practical version of CGM. [NW, Chapter 5]. Handout (draft version, report any inaccuracies), exercises here.
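A sketch of the practical CGM for Ax = b with A symmetric positive definite, following the overall structure of [NW, Chapter 5] (variable names are mine, not the book's):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10):
    """Practical CGM for Ax = b with A symmetric positive definite.
    In exact arithmetic it terminates in at most n iterations."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = A @ x - b                     # residual = gradient of 1/2 x^T A x - b^T x
    p = -r                            # first direction: steepest descent
    rs = r @ r
    for _ in range(n):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)         # exact line search along p
        x = x + alpha * p
        r = r + alpha * Ap
        rs_new = r @ r
        p = -r + (rs_new / rs) * p    # new direction, conjugate to the previous ones
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b), np.linalg.solve(A, b))
```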
Lecture 8: Quasi-Newton methods. The quadratic model for the function f; the quasi-Hessian B_k and the quasi-Hessian inverse H_k. Requirements on B_k/H_k. Derivation of the BFGS method from the requirements on H_k. Advantages and disadvantages of quasi-Newton methods. [NW, Chapter 6]. Handout (draft version, report any inaccuracies). Exercises from last week remain open.
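A sketch of the BFGS update of the inverse-Hessian approximation H_k; the formula is the standard one, the surrounding code and data are illustrative:

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation:
    H_{k+1} = (I - rho*s*y^T) H_k (I - rho*y*s^T) + rho*s*s^T,
    with s = x_{k+1} - x_k, y = grad_{k+1} - grad_k, rho = 1/(y^T s).
    Requires the curvature condition y^T s > 0 (guaranteed by a Wolfe line search)."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Sanity check: the updated H satisfies the secant equation H_{k+1} y = s.
H_new = bfgs_update(np.eye(2), np.array([0.5, -0.2]), np.array([1.0, 0.3]))
print(H_new @ np.array([1.0, 0.3]), np.array([0.5, -0.2]))
```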
Lecture 9: Introduction to constrained optimization. Why gradient descent fails on constrained problems. Introduction by example to the necessary optimality conditions in the interior and on the boundary. Formal statement of the KKT conditions theorem. [NW, Chapter 12]. Handout.
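For reference, for a problem min f(x) subject to c_i(x) = 0 (i in E) and c_i(x) >= 0 (i in I), the KKT conditions at x* with multipliers lambda* read (in the notation of [NW, Chapter 12]):

```latex
\begin{aligned}
  & \nabla f(x^*) - \textstyle\sum_i \lambda_i^* \nabla c_i(x^*) = 0
      && \text{(stationarity of the Lagrangian)} \\
  & c_i(x^*) = 0 \;\; (i \in \mathcal{E}), \qquad
    c_i(x^*) \ge 0 \;\; (i \in \mathcal{I})
      && \text{(primal feasibility)} \\
  & \lambda_i^* \ge 0 \;\; (i \in \mathcal{I})
      && \text{(dual feasibility)} \\
  & \lambda_i^* \, c_i(x^*) = 0 \;\; (i \in \mathcal{E} \cup \mathcal{I})
      && \text{(complementarity)}
\end{aligned}
```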
Lecture 10: Constrained optimization, part 2. The importance of constraint qualifications in the KKT conditions. Definition of the tangent cone and the set of linearized feasible directions. Common constraint qualifications: LICQ, MFCQ, Slater's condition. [NW, Chapter 12]. Handout.
Lecture 11: Duality. Definition of the Lagrange dual function and the dual problem. Weak duality and its proof. Duality gap, strong duality. For a convex problem, if the KKT conditions hold, then strong duality holds (with proof). [BV, Chapter 5].
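The weak duality proof is essentially a one-line chain of inequalities (in the notation of [BV, Chapter 5]): for any lambda >= 0, any nu, and any primal-feasible point x~,

```latex
g(\lambda, \nu)
  = \inf_x L(x, \lambda, \nu)
  \le L(\tilde{x}, \lambda, \nu)
  = f_0(\tilde{x}) + \sum_i \lambda_i f_i(\tilde{x}) + \sum_j \nu_j h_j(\tilde{x})
  \le f_0(\tilde{x}),
```

since f_i(x~) <= 0, lambda_i >= 0, and h_j(x~) = 0 for feasible x~; taking the infimum over feasible x~ gives d* <= p*.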
Lecture 12: Duality, part 2. Strong duality and sufficient conditions for strong duality. Slater's condition and its weaker form. Solving a primal problem via the KKT conditions: the water-filling problem [BV, 5.5]. Solving a constrained problem by solving the unconstrained dual: the equality-constrained analytic center problem [BV, 10.2]. Handout with exercises.
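A small sketch of the closed-form KKT solution to the water-filling problem from [BV, 5.5] (minimize -sum_i log(alpha_i + x_i) subject to x >= 0 and sum_i x_i = 1): the KKT conditions give x_i = max(0, 1/nu - alpha_i), and the water level 1/nu can be located by bisection. Function and variable names are mine:

```python
import numpy as np

def water_filling(alpha, budget=1.0, iters=100):
    """Solve min -sum(log(alpha_i + x_i)) s.t. x >= 0, sum(x) = budget.
    The KKT conditions give x_i = max(0, level - alpha_i) for a common
    'water level' (= 1/nu), which we locate by bisection."""
    alpha = np.asarray(alpha, dtype=float)
    lo, hi = alpha.min(), alpha.max() + budget    # bracket for the level
    for _ in range(iters):
        level = (lo + hi) / 2
        if np.maximum(0.0, level - alpha).sum() > budget:
            hi = level                            # too much water: lower the level
        else:
            lo = level
    return np.maximum(0.0, (lo + hi) / 2 - alpha)

print(water_filling([0.1, 0.5, 1.0]))   # more water goes where alpha_i is small
```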
Lecture 13: Interior-point methods for linear programming. Introduction to the primal-dual interior-point method: the primal and dual LP formulations, the KKT complementarity conditions for the dual and their relaxation. The duality measure. The Newton method for solving a system F(x) = 0 for a vector-valued function F. The Jacobian matrix. The general interior-point method framework. [NW, Chapter 14]. Please note: exercises from Lecture 12 are still open.
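A sketch of the Newton step for a nonlinear system F(x) = 0 via the Jacobian, the building block used inside the interior-point framework; the example system is made up for illustration:

```python
import numpy as np

def newton_system(F, J, x, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0: at each step solve J(x) dx = -F(x)
    for the step dx and set x <- x + dx."""
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + np.linalg.solve(J(x), -Fx)
    return x

# Illustrative system (not from the lecture): x^2 + y^2 = 4, x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]],
                        [v[1],   v[0]]])   # Jacobian of F
print(newton_system(F, J, np.array([2.0, 0.5])))
```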
Lecture 14: Interior-point methods, part 2. The central path; the feasible and strictly feasible regions. The N_{-infty} neighborhood of the central path. The long-step path-following framework. Proofs of Lemmas 14.1 and 14.2 from the [NW] book. Main idea of the proof of Theorem 14.3. [NW, Chapter 14]. Handout (draft version, report any inaccuracies).
Lecture 15: Review of the course, focusing especially on Lectures 5, 7 and 8.
This semester, exercises can be found in the handouts.
There will be 2 mandatory homework sheets. Expect them roughly one third and two thirds of the way through the course.
Mandatory homework 1. Deadline: May 21, 10:15.
Mandatory homework 2. Deadline: June 18, 23:59.
Exercise session passing criteria:
At least 4 exercises presented in front of the class, each in a different session.
At least 50% of overall points from the mandatory homework sheets.
At least 50% of points from the midterm test.
All extra exercises presented in front of the class that do not count toward the four required presentations above will be converted into extra points for the mandatory homework tasks.
Beyond the requirements above, weekly attendance at the exercise sessions is not mandatory.
For the lab session tasks, please visit the Google Drive managed by Filip Chudy.
[NW] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Second edition, Springer, 2006. The book is available as a PDF at this link (likely unofficially).
[BV] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press 2004. Available online at https://web.stanford.edu/~boyd/cvxbook/ .
[MG] Jiří Matoušek and Bernd Gärtner. Understanding and Using Linear Programming. Springer, 2007 (https://link.springer.com/book/10.1007/978-3-540-30717-4). The book is also available as a PDF under this link (likely unofficially).
[Heb] Waldemar Hebisch. Lecture notes for the course Numerical Optimization, 2021. Available on the Internet Archive at this link.