KAUST - Spring 2018 semester

Organizers: Filip Hanzely, Aritra Dutta and Peter Richtárik.
KAUST, Building 1, room 2107 
Sundays, 12:15 - 13:45 (lunch provided)



 Date       Speaker                Paper
 13.2.2018  Nicolas Loizou         Random inexact projection methods
 20.2.2018  Filip Hanzely          The Implicit Bias of Gradient Descent on Separable Data
 27.2.2018  El Houcine Bergou      Random direct search method for unconstrained minimization
 4.3.2018   Samuel Horvath         Fast Incremental Method for Nonconvex Optimization
 11.3.2018  Alibek Sailanbayev     signSGD: Compressed Optimisation for Non-Convex Problems
 18.3.2018  Konstantin Mishchenko  Penalty reformulation for constrained optimization
 25.3.2018  Adel Bibi              Analytic Expressions for Probabilistic Moments of PL-DNN with Gaussian Input
 8.4.2018   Konstantin Mishchenko  A Simple Practical Accelerated Method for Finite Sums
 15.4.2018  Filip Hanzely          On the Convergence of Adam and Beyond
 29.4.2018  Samuel Horvath         Second-Order Stochastic Optimization for Machine Learning in Linear Time
 6.5.2018   Matthias Mueller       Optimization for Deep Learning

KAUST - Fall 2017 semester

 Date        Speaker                Paper
 22.8.2017   Aritra Dutta           A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices (Dutta, Li and Richtárik - 7/2017)
 29.8.2017   Filip Hanzely          Relatively-Smooth Convex Optimization by First-Order Methods, and Applications (Lu, Freund and Nesterov - 10/2016)
 12.9.2017   Filip Hanzely          Randomized methods for relative smooth optimization
 19.9.2017   Aritra Dutta           Self-Occlusion and Disocclusion in Causal Video Object Segmentation
 26.9.2017   Konstantin Mishchenko  An Asynchronous Distributed Prox-Grad Algorithm
 3.10.2017   Konstantin Mishchenko  An Asynchronous Distributed Prox-Grad Algorithm (continued)
 10.10.2017  Alibek Sailanbayev     Breaking Locality Accelerates Block Gauss-Seidel
 17.10.2017  Sebastian Stich        Approximate Steepest Coordinate Descent
 24.10.2017  Viktor Lukáček         Dykstra's Algorithm with Bregman Projections: A Convergence Proof
 31.10.2017  Nikita Doikov          Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians; Cubic Regularization of Newton Method and Its Global Performance
 7.11.2017   Konstantin Mishchenko  Proximal-Proximal-Gradient Method
 14.11.2017  Nicolas Loizou         First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization
 21.11.2017  Robert Gower           SAGA is a Variant of Stochastic Gradient: New View and New Proof
 28.11.2017  Filip Hanzely          "Relative-Continuity" for Non-Lipschitz Non-Smooth Convex Optimization using Stochastic (or Deterministic) Mirror Descent
 5.12.2017   Konstantin Mishchenko  SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient