SOC Reading Group
We meet regularly to discuss classical and recent papers in optimization and control. The reading group is open to all UC San Diego students, so feel free to join if you're interested! To foster meaningful discussion, we encourage participants to read the assigned papers before each session.
For Winter 2024, the reading group takes place on Wednesdays from 2:30 pm to 4:00 pm in FAH 3009.
See a full schedule here.
Upcoming Reading Group
Past Reading Group
Winter 2024
[2024.04.24]
Complementarity and nondegeneracy in semidefinite programming
Speaker: Feng-Yi Liao [Slides]
[2024.04.10]
Convergence of the Augmented Lagrangian Method
Speaker: Pranav Reddy [Slides]
[2024.04.03]
Robustness of Model Predictive Control
Speaker: Xu Shang [Slides]
[2024.03.20]
Convex duality in linear quadratic problems
[2024.03.13]
On the tightness of SDP relaxations of QCQPs
[2024.03.06]
Introduction to the Augmented Lagrangian Method
Speaker: Pranav Reddy [Slides]
[2024.02.28]
A nearly linearly convergent gradient method for “typical” nonsmooth functions (Invited talk by Liwei Jiang)
Speaker: Liwei Jiang
Abstract: Nonsmooth optimization problems appear throughout machine learning and signal processing. Standard gradient methods in nonsmooth optimization are often described as “slow” since the well-known “lower-complexity bounds” suggest they converge at best sublinearly. In this talk, I will introduce a gradient method that (locally) exponentially improves upon such lower bounds. The method is “parameter-free” and converges nearly linearly on “typical” optimization problems. The key insight is that “typical” nonsmooth functions are not pathological but are instead “partially smooth” in algorithmically useful ways.
Bio: Liwei Jiang is a fifth-year Ph.D. candidate in the School of Operations Research and Information Engineering (ORIE) at Cornell University. Previously, he obtained a B.S. in Mathematics and Statistics from Nanjing University in 2019 and an M.S. in Operations Research from Cornell in 2022. His fields of interest are optimization, data science, and machine learning. His research focuses on designing, analyzing, and accelerating optimization algorithms for modern estimation and learning problems. Liwei is a recipient of the Hsien Wu and Daisy Yen Wu Scholarship and a two-time recipient of Cornell ORIE's Teaching Assistant of the Year award.
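As context for the sublinear "lower-complexity bounds" mentioned in the abstract, here is a minimal sketch (the baseline method, not the talk's improved one) of the classic subgradient method on the nonsmooth function f(x) = |x|, using the standard diminishing step size:

```python
import math

def f(x):
    return abs(x)  # a canonical nonsmooth convex function

def subgrad(x):
    # A subgradient of |x|: the sign of x (0 is a valid choice at the kink).
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

x = 10.0
best = f(x)
for k in range(1, 10001):
    x -= subgrad(x) / math.sqrt(k)  # classic 1/sqrt(k) step size
    best = min(best, f(x))

# The best value found decays only sublinearly, roughly O(1/sqrt(k));
# the method described in the talk aims to converge nearly linearly
# on "typical" (partially smooth) problems instead.
```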
[2024.02.21]
Cancelled (ITA workshop week)
[2024.02.14]
Online Policy Optimization in Nonlinear Time-Varying Systems (Invited talk by Yiheng Lin)
Speaker: Yiheng Lin
Abstract: In this talk, I will introduce our work on online policy optimization under time-varying dynamics and costs with possibly unknown dynamical models. We study a setting where the online agent seeks to minimize the total cost incurred over a finite horizon by optimizing the parameters of a given policy class. We propose the Gradient-based Adaptive Policy Selection (GAPS) algorithm, which achieves the optimal policy regret and is efficient to implement. The key component of our theoretical analysis is establishing the connections between GAPS for online policy optimization and online gradient descent (OGD) for classic online optimization, which allow us to ‘transfer’ existing regret guarantees for OGD to GAPS. Further, I will present a meta-framework that combines an online policy optimization algorithm like GAPS with an online model estimator to address the challenge of unknown nonlinear dynamical models. Compared with many prior works that study online control in unknown linear dynamical systems, our work provides a critical insight: learning the true dynamical model globally is unnecessary. Instead, the online model estimator only needs to predict well on the actual trajectory visited by the controller, which is a tractable goal for general nonlinear dynamical systems.
Bio: Yiheng Lin is a fourth-year Ph.D. candidate in the Department of Computing and Mathematical Sciences at California Institute of Technology. He is co-advised by Prof. Adam Wierman and Prof. Yisong Yue. Yiheng was named an Amazon/Caltech AI4Science Fellow in 2023, a PIMCO Graduate Fellow in Data Science in 2022, and a Kortschak Scholar in 2020. His research interests include online learning, control, and reinforcement learning.
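The abstract connects GAPS to online gradient descent (OGD). As background, here is a minimal OGD sketch on a stream of scalar quadratic losses; the losses and all parameter choices are illustrative assumptions, not from the talk, and policy regret in GAPS is measured against a policy class rather than a fixed point:

```python
import random

# Online gradient descent on losses f_t(x) = (x - c_t)^2 revealed one at a time.
# "Regret" compares the algorithm's total loss to the best fixed point in
# hindsight; OGD's regret grows sublinearly in the horizon T, which is the
# kind of guarantee the abstract says can be transferred to GAPS.

random.seed(0)
T = 2000
targets = [random.uniform(-1.0, 1.0) for _ in range(T)]

x, eta = 0.0, 0.05  # initial point and (illustrative) constant step size
loss_alg = 0.0
for c in targets:
    loss_alg += (x - c) ** 2   # pay the current loss, then observe its gradient
    x -= eta * 2.0 * (x - c)   # gradient step: f_t'(x) = 2(x - c)

best_fixed = sum(targets) / T  # minimizer of the total loss over fixed points
loss_best = sum((best_fixed - c) ** 2 for c in targets)
regret = loss_alg - loss_best  # small relative to T (sublinear regret)
```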
[2024.02.07]
Cancelled
[2024.01.31]
Learning Koopman Eigenfunctions and Invariant Subspaces from Data: Symmetric Subspace Decomposition
Discussion Leader: Xu Shang [Slides]
[2024.01.24]
Cancelled
Discussion Leader: Xu Shang
[2024.01.17]
Semidefinite Programming Duality and Linear Time-invariant Systems
Discussion Leader: Rich Pai [Slides]
Fall 2023
[2023.12.06]
Model-based optimization
Discussion Leader: Feng-Yi Liao [Slides]
[2023.11.22]
Applications of Performance Estimation
Discussion Leader: Pranav Reddy [Slides]
[2023.11.15]
Modeling Nonlinear Control Systems via Koopman Control Family
Discussion Leader: Xu Shang [Slides]
[2023.11.08]
Nonlinear Control
Discussion Leader: Hesam Mojtahedi [Slides]
[2023.11.01]
Distributionally Robust Linear Quadratic Control
Discussion Leader: Rich Pai [Slides]
[2023.10.04]
Introduction to Convex Interpolation
Discussion Leader: Pranav Reddy [Slides]
[2023.09.27]
Koopman Operator Theory II
Discussion Leader: Xu Shang [Slides]
[2023.09.21]
An introduction to optimization on smooth manifolds II
Discussion Leader: Hesam Mojtahedi [Slides]
[2023.09.14]
The effect of smooth parameterizations on nonconvex optimization landscapes
Discussion Leader: Rich Pai [Slides]
[2023.09.07]
Lower complexity and model-based optimization
Discussion Leader: Feng-Yi Liao [Slides]
[2023.08.31]
Koopman Operator Theory I
[2023.08.25]
An introduction to optimization on smooth manifolds I
Discussion Leader: Hesam Mojtahedi [Slides]
[2023.08.17]
Nonsmooth Nonconvex Optimization: INGD and deterministic algorithms
Discussion Leader: Rich Pai [Slides]