Talks
We started Friends of Optimization in December 2024. The list below also includes some earlier talks that we organized.
Negative Stepsizes Make Gradient-Descent-Ascent Converge by Henry Shugart, August 29, 2025 (virtual) [Video link]
Provable Non-Accelerations of the Heavy-Ball Method by Aymeric Dieuleveut, March 10, 2025 at Columbia (in person) [Video link]
Algebraic Methods in Convex Optimization by Kevin Shu, February 13, 2025 at UCLA (in person) [Video link]
How to Make the Gradient Descent-Ascent Converge to Local Minima by Donghwan Kim, January 8, 2025 at UCLA (in person) [Video link]
Schedules & Schedule-Free Learning by Aaron Defazio, December 6, 2024 at UCLA (in person) [Video link]
Acceleration by Stepsize Hedging by Jason Altschuler, December 4, 2023 at MIT (in person) [Video link]
Provably Faster Gradient Descent via Long Steps by Ben Grimmer, August 11, 2023 at MIT (virtual) [Video link]
Parameter-Free Adaptive Methods for Deep Learning by Konstantin Mishchenko, June 26, 2023 at MIT (virtual) [Video link]