We meet regularly to discuss both classical and recent papers in optimization and control. The reading group is open to UC San Diego students; feel free to join if you're interested! To foster meaningful discussion, we kindly encourage participants to read the assigned papers before each session.
For Winter 2025, the reading group meets on Wednesdays from 4:00 pm to 5:30 pm in FAH 3009.
See the full schedule here.
Summer 2025
[2025.07.16]
First-order Methods Almost Always Avoid Saddle Points
Speaker: Feng-Yi Liao [Slides]
Supplemental Material:
[2025.07.30]
TBD
Spring 2025
[2025.07.02]
Introduction to the Kalman filter from an optimal filtering perspective
[2025.06.18]
Finding stationary points in nonsmooth nonconvex optimization
Speaker: Yuto Watanabe [Slides]
Supplemental Material:
[2025.06.04]
Some difficulties and advances in nonsmooth optimization
Speaker: Feng-Yi Liao [Slides]
Supplemental Material:
[2025.05.21]
Steepest Descent Method with Random Step Lengths
Speaker: Pranav Reddy [Slides]
Supplemental Material:
[2025.05.07]
High-Dimensional Probability and Its Applications in System Identification
Speaker: Jiachen Qian [Slides]
Supplemental Material:
[2025.04.16]
Data Dimension Reduction and Manifold Learning
[2025.04.02]
Stationarity in Nonsmooth Optimization
Speaker: Yuto Watanabe [Slides]
Supplemental Material:
Winter 2025
[2025.03.19]
The power of predictions in online control
[2025.02.26]
Algebraic characterization of equivalence between optimization algorithms
Speaker: Feng-Yi Liao [Slides]
Supplemental Material:
[2025.02.12]
On the local stability of semidefinite relaxations
Speaker: Pranav Reddy [Slides]
Supplemental Material:
[2025.01.15]
Learning for prediction from the perspective of low-rank approximation
Speaker: Jiachen Qian [Slides]
Supplemental Material:
[2025.01.08]
Preliminary exam practice
Speaker: Xu Shang [Slides]
Supplemental Material:
Fall 2024
[2024.12.18]
Linear-quadratic regulator and H∞ analysis problems via covariance representations and duality
Speaker: Yuto Watanabe [Slides]
Supplemental Material:
[2024.12.11]
Non-stationary Online Learning and Non-stochastic Control
Speaker: Rich Pai [Slides]
Supplemental Material:
[2024.12.04]
Some connections among Mirror Descent, Frank-Wolfe, and Bundle Methods
Speaker: Feng-Yi Liao [Slides]
Supplemental Material:
[2024.11.27]
Two Algorithms for Smooth Distributed Convex Optimization
[2024.11.20]
Introduction to the Disturbance Attenuation Problem
Speaker: Xu Shang [Slides]
Supplemental Material:
[2024.11.06]
Bi-level optimization with implicit gradient methods
Speaker: Chendi Qv [Slides]
Supplemental Material:
[2024.10.30]
Regret Analysis in Estimation
Speaker: Jiachen Qian [Slides]
Supplemental Material:
[2024.10.23]
Policy optimization for H∞ control of discrete-time LTI systems
[2024.10.16]
Non-stationary Online Learning and Non-stochastic Control
Speaker: Rich Pai [Slides]
Supplemental Material:
[2024.10.09]
A simple nearly optimal restart scheme for speeding up first-order methods
Speaker: Feng-Yi Liao [Slides]
Supplemental Material:
Summer 2024
[2024.09.25]
Introduction to Distributed Optimization
[2024.08.28]
Universal gradient methods for convex optimization problems
Speaker: Feng-Yi Liao [Slides]
Supplemental Material:
[2024.08.21]
Stability Analysis of Data-enabled Predictive Control via Koopman Operator
Speaker: Xu Shang [Slides]
Supplemental Material:
Spring 2024
[2024.05.03]
Policy Optimization of Zero-Sum Linear Quadratic Games, Stability and Convergence of LQ RAL
[2024.04.24]
Complementarity and nondegeneracy in semidefinite programming
Speaker: Feng-Yi Liao [Slides]
Supplemental Material:
[2024.04.10]
Convergence of the Augmented Lagrangian Method
Speaker: Pranav Reddy [Slides]
Supplemental Material:
[2024.04.03]
Robustness of Model Predictive Control
Speaker: Xu Shang [Slides]
Supplemental Material:
Winter 2024
[2024.03.20]
Convex duality in linear quadratic problems
[2024.03.13]
On the tightness of SDP relaxations of QCQPs
[2024.03.06]
Introduction to the Augmented Lagrangian Method
Speaker: Pranav Reddy [Slides]
Supplemental Material:
[2024.02.28]
A nearly linearly convergent gradient method for “typical” nonsmooth functions (Invited talk by Liwei Jiang)
Speaker: Liwei Jiang
Abstract: Nonsmooth optimization problems appear throughout machine learning and signal processing. Standard gradient methods in nonsmooth optimization are often described as “slow” since the well-known “lower-complexity bounds” suggest they converge at best sublinearly. In this talk, I will introduce a gradient method that (locally) exponentially improves upon such lower bounds. The method is “parameter-free” and converges nearly linearly on “typical” optimization problems. The key insight is that “typical” nonsmooth functions are not pathological but are instead “partially smooth” in algorithmically useful ways.
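For context, here is a minimal Python sketch of the baseline the abstract calls "slow": the classic subgradient method with a diminishing step size, whose worst-case rate on nonsmooth convex problems is the sublinear one given by the lower-complexity bounds. The objective f(x) = ||x||_1 and all constants are our illustrative choices, not from the talk; this is the baseline being improved upon, not the speaker's nearly linearly convergent method.

import numpy as np

# Subgradient method on the illustrative objective f(x) = ||x||_1 (minimum 0).
def subgradient_method(x0, num_iters=1000):
    x = x0.copy()
    best = np.abs(x).sum()                # track the best value seen so far
    for k in range(1, num_iters + 1):
        g = np.sign(x)                    # a valid subgradient of ||x||_1 at x
        x = x - 0.1 * g / np.sqrt(k)      # classic diminishing step size
        best = min(best, np.abs(x).sum())
    return best

print(subgradient_method(np.ones(5)))     # improves only slowly: the sublinear regime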
Bio: Liwei Jiang is a fifth-year Ph.D. candidate in the School of Operations Research and Information Engineering (ORIE) at Cornell University. Previously, he obtained a B.S. in Mathematics and Statistics from Nanjing University in 2019 and an M.S. in Operations Research from Cornell in 2022. His fields of interest are optimization, data science, and machine learning. His research focuses on designing, analyzing, and accelerating optimization algorithms for modern estimation and learning problems. Liwei is a recipient of the Hsien Wu and Daisy Yen Wu Scholarship and a two-time recipient of Cornell ORIE's Teaching Assistant of the Year award.
[2024.02.21]
Cancelled, ITA workshop week
[2024.02.14]
Online Policy Optimization in Nonlinear Time-Varying Systems (Invited talk by Yiheng Lin)
Speaker: Yiheng Lin
Abstract: In this talk, I will introduce our work on online policy optimization under time-varying dynamics and costs with possibly unknown dynamical models. We study a setting where the online agent seeks to minimize the total cost incurred over a finite horizon by optimizing the parameters for a given policy class. We propose the Gradient-based Adaptive Policy Selection (GAPS) algorithm that achieves the optimal policy regret and is efficient to implement. The key component of our theoretical analysis is establishing the connections between GAPS for online policy optimization and online gradient descent (OGD) for the classic online optimization problem, which allow us to 'transfer' existing regret guarantees for OGD to GAPS. Further, I will present a meta-framework that can combine an online policy optimization algorithm like GAPS with an online model estimator to address the challenge of unknown nonlinear dynamical models. Compared with many prior works that study online control in unknown linear dynamical systems, our work provides a critical insight that learning the true dynamical model globally is unnecessary. Instead, the online model estimator only needs to predict well on the actual trajectory visited by the controller, which is a tractable goal for general nonlinear dynamical systems.
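As a toy illustration of the GAPS-to-OGD connection described above (an assumed reduction for intuition only, not the actual GAPS algorithm), the sketch below runs online gradient steps on a scalar policy gain through a one-step lookahead cost; GAPS generalizes this idea with longer memory of how past parameters affect the trajectory, plus regret guarantees. The scalar dynamics, costs, and constants are hypothetical choices, not from the talk.

import numpy as np

# Scalar system x_{t+1} = a*x_t + u_t + w_t with linear policy u_t = -theta*x_t.
# Each round, take a gradient step on the one-step lookahead cost
#   c(theta) = (a*x - theta*x)^2 + (theta*x)^2,
# whose gradient is 2*x^2*(2*theta - a); its minimizer is theta = a/2.
rng = np.random.default_rng(0)
a, eta, sigma = 0.9, 0.05, 0.5            # dynamics, step size, disturbance level
theta, x = 0.0, 1.0
for t in range(500):
    grad = 2 * x**2 * (2 * theta - a)     # d c / d theta at the current state
    theta -= eta * grad                   # online gradient step on the policy
    u = -theta * x                        # act with the updated policy gain
    x = a * x + u + sigma * rng.standard_normal()
print(f"theta = {theta:.3f} (one-step-optimal gain a/2 = {a/2:.3f})")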
Bio: Yiheng Lin is a fourth-year Ph.D. candidate in the Department of Computing and Mathematical Sciences at the California Institute of Technology. He is co-advised by Prof. Adam Wierman and Prof. Yisong Yue. Yiheng was named an Amazon/Caltech AI4Science Fellow in 2023, a PIMCO Graduate Fellow in Data Science in 2022, and a Kortschak Scholar in 2020. His research interests include online learning, control, and reinforcement learning.
[2024.02.07]
Cancelled
[2024.01.31]
Learning Koopman Eigenfunctions and Invariant Subspaces from Data: Symmetric Subspace Decomposition
Discussion Leader: Xu Shang, [Slides]
Supplemental Material:
[2024.01.24]
Cancelled
Discussion Leader: Xu Shang
[2024.01.17]
Semidefinite Programming Duality and Linear Time-invariant Systems
Discussion Leader: Rich Pai, [Slides]
Supplemental Material:
Fall 2023
[2023.12.06]
Model-based optimization
Discussion Leader: Feng-Yi Liao, [Slides]
Supplemental Material:
[2023.11.22]
Applications of Performance Estimation
Discussion Leader: Pranav Reddy, [Slides]
Supplemental Material:
[2023.11.15]
Modeling Nonlinear Control Systems via Koopman Control Family
Discussion Leader: Xu Shang, [Slides]
Supplemental Material:
[2023.11.08]
Nonlinear Control
Discussion Leader: Hesam Mojtahedi, [Slides]
Supplemental Material:
[2023.11.01]
Distributionally Robust Linear Quadratic Control
Discussion Leader: Rich Pai, [Slides]
Supplemental Material:
[2023.10.04]
Introduction to Convex Interpolation
Discussion Leader: Pranav Reddy, [Slides]
Supplemental Material:
[2023.09.27]
Koopman Operator Theory II
Discussion Leader: Xu Shang, [Slides]
Supplemental Material:
[2023.09.21]
An introduction to optimization on smooth manifolds II
Discussion Leader: Hesam Mojtahedi, [Slides]
Supplemental Material:
[2023.09.14]
The effect of smooth parameterizations on nonconvex optimization landscapes
Discussion Leader: Rich Pai, [Slides]
Supplemental Material:
[2023.09.07]
Lower complexity bounds and model-based optimization
Discussion Leader: Feng-Yi Liao, [Slides]
Supplemental Material:
[2023.08.31]
Koopman Operator Theory I
[2023.08.25]
An introduction to optimization on smooth manifolds I
Discussion Leader: Hesam Mojtahedi, [Slides]
Supplemental Material:
[2023.08.17]
Nonsmooth Nonconvex Optimization: INGD and deterministic algorithms
Discussion Leader: Rich Pai, [Slides]
Supplemental Material: