Reading Group on Iterative Methods
About the reading group
The purpose of this reading group is to study recent developments in the complexity of iterative methods for optimization and related problems in machine learning. We are particularly interested in advances obtained via the Performance Estimation Problem (PEP) framework, the discretization of continuous-time dynamics, and related techniques.
We have weekly online meetings. If you are interested in participating, please send an e-mail to Cristóbal Guzmán.
Participants
Santiago Armstrong (PUC)
Roberto Cominetti (UAI)
Juan Pablo Contreras (UAI)
Juan Pablo Flores (PUC)
Cristóbal Guzmán (PUC)
Carlos Sing-Long (PUC)
Carlos Riveros (USACH)
Benjamín Rubio (PUC)
Patricio Ulloa (PUC)
Fall Schedule (March-July 2020)
03-26: C. Guzmán. Uniform Stability of Stochastic Gradient Descent for Nonsmooth Losses (Bassily, Feldman, Guzmán, Talwar). video
04-02: P. Ulloa. Smooth Strongly Convex Interpolation and Exact Worst-case Performance of First-order Methods (Taylor, Hendrickx & Glineur). slides
04-09: J.P. Contreras. Accelerated Proximal Point Method for Maximally Monotone Operators (Kim)
04-16: S. Armstrong. A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications (Bauschke, Bolte, Teboulle). slides
04-22: Guest speaker: Adrien Taylor
05-08: J.P. Flores. Optimal Complexity of Bregman First-Order Methods (Dragomir, Taylor, d'Aspremont, Bolte). slides
06-18: R. Cominetti. Rates of Convergence for the Krasnoselskii-Mann Iteration. slides
06-25: C. Guzmán. Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection (Ryu, Taylor, Bergeling & Giselsson). slides
07-02: P. Ulloa. Stochastic First-Order Methods: Non-asymptotic and Computer-aided Analyses via Potential Functions (Taylor & Bach). slides
07-16: C. Sing-Long. Inverse Problems in Imaging and Union of Subspaces (Gilton, Ongie & Willett)
07-23: S. Armstrong. Combinatorial Algorithms for Seriation Problems
Spring Schedule (August-December 2020)
08-13: J.P. Flores. Constructions of Uniformly Convex Functions (Borwein et al.). slides
08-20: A. Valdés. Superfast Second-Order Methods (Nesterov)
08-27: B. Rubio. Higher-Order Proximal-Point Methods (Nesterov). slides, notes
09-03: J.P. Contreras. Constructions of Uniformly Convex Functions, part 2 (Borwein et al.)
09-10: C. Guzmán. Rényi Differential Privacy (Mironov). slides
09-24: T. González. Sharper Bounds for Uniformly Stable Algorithms (Bousquet, Klochkov & Zhivotovskiy). slides
10-01: P. Ulloa. Stochastic Convex Optimization with Heavy Tails (Davis, Drusvyatskiy, Xiao & Zhang)
10-08: C. Riveros. Nonasymptotic Bounds for Langevin Monte Carlo
10-15: J.P. Contreras. Accelerated Bregman Methods for Relatively Smooth Convex Optimization (Hanzely, Richtárik & Xiao). slides
11-19: C. Guzmán. Open Problem: The Sample Complexity of p-norm DP-SCO
Papers for 2021
Acceleration in convex optimization
Affine-Invariant Contracting-Point Methods for Convex Optimization (Doikov & Nesterov)
Stochastic convex optimization and equilibrium
Tight Analyses for Nonsmooth Stochastic Gradient Descent (Harvey, Liaw, Plan & Randhawa)
Making the Last Iterate of SGD Information-Theoretically Optimal (Jain, Nagaraj & Netrapalli)
Limits on Gradient Compression for Stochastic Optimization (Mayekar & Tyagi)
Simple and Optimal Methods for Stochastic Variational Inequalities, I: Operator Extrapolation (Kotsalis, Lan & Li)
Finite-Sample Analysis of Contractive Stochastic Approximation Using Smoothed Convex Envelopes (Chen, Maguluri, Shakkottai & Shanmugam)
Non-Euclidean Proximal Methods
A Characterization of Bregman Firmly Nonexpansive Operators Using a New Monotonicity Concept (Borwein, Reich & Sabach)
Bregman Monotone Optimization Algorithms (Bauschke, Borwein & Combettes)
Algorithmic Stability and Generalization
A Tight Lower Bound for Uniformly Stable Algorithms (Liu & Lu)
SGD Generalizes Better than GD (and Regularization Doesn't Help) (Amir, Koren & Livni)
Learning while Dissipating Information: Understanding the Generalization Ability of SGLD (Wang, Huang, Gao & Calmon)
Train Simultaneously, Generalize Better: Stability of Gradient-Based Minimax Learners (Farnia & Ozdaglar)
Sampling
Logsmooth Gradient Concentration and Tighter Runtimes for Metropolized Hamiltonian Monte Carlo, and Structured Logconcave Sampling with a Restricted Gaussian Oracle (Lee, Shen & Tian)
Reinforcement Learning
Stochastic Approximation and Q-Learning (Qu & Wierman)
Simple and Optimal Methods for Stochastic Variational Inequalities, II: Markovian noise and policy evaluation in Reinforcement Learning (Kotsalis, Lan & Li)
Learning Theory
The Connection Between Approximation, Depth Separation and Learnability in Neural Networks (Malach, Yehudai, Shalev-Shwartz & Shamir)
Feature Purification: How Adversarial Training Performs Robust Deep Learning (Allen-Zhu & Li)
Proper Learning, Helly Number, and an Optimal SVM Bound (Bousquet, Hanneke, Moran & Zhivotovskiy); best paper at COLT 2020
Open Problem: Fast and Optimal Online Portfolio Selection (van Erven, van der Hoeven, Kotłowski & Koolen)
Differential Privacy
Learning with User-Level Privacy (Levy et al.)
Privately Learning High-Dimensional Distributions (Kamath, Li, Singhal & Ullman)
Fingerprinting Codes and the Price of Approximate Differential Privacy (Bun, Ullman & Vadhan)
The Cost of Privacy (Cai, Wang & Zhang)