KAUST - Fall semester

KAUST, Building 1, room 3119
Sundays, 12:00 - 13:30 (lunch provided)


Date        Speaker                  Paper
3.2.2019    Konstantin Mishchenko    Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems
10.2.2019   Eduard Gorbunov          An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
17.2.2019   Egor Shulgin             Train faster, generalize better: Stability of stochastic gradient descent
24.2.2019   Xun Qian                 Curvature-aided Incremental Aggregated Gradient Method
10.3.2019   Elnur Gasanov            Alternating Randomized Block Coordinate Descent
17.3.2019   Filip Hanzely            The Complexity of Making the Gradient Small in Stochastic Convex Optimization
31.3.2019   Samuel Horvath           The Convergence of Sparsified Gradient Methods
7.4.2019    Alibek Sailanbayev       Stochastic Gradient Descent Escapes Saddle Points Efficiently
14.4.2019   Konstantin Mishchenko    Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
21.4.2019   Adil Salim               Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization
28.4.2019   Nicolas Loizou           Ongoing own research
5.5.2019    Alibek Sailanbayev       Spurious Local Minima are Common in Two-Layer ReLU Neural Networks


Date        Speaker              Paper
30.8.2018   Sarah Sachs          Generalization of Jacobian Sketching (Master Thesis)
6.9.2018    Samuel Horvath       Stochastic Nested Variance Reduction for Nonconvex Optimization
9.9.2018    Alibek Sailanbayev   Optimization of composite functions
16.9.2018   Elnur Gasanov        Stochastic Spectral and Conjugate Descent Methods
30.9.2018   Dmitry Kovalev       Accelerated Probabilistic SVRG and SAGA
7.10.2018   Filip Hanzely        Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start
14.10.2018  El Houcine Bergou    Rotation Averaging
21.10.2018  Aritra Dutta         Matrix Completion Under Interval Uncertainty
4.11.2018   Sebastian Stich      k-SVRG: Variance Reduction for Large-Scale Optimization
18.11.2018  Samuel Horvath       Natasha: Faster Non-Convex Stochastic Optimization via Strongly Non-Convex Parameter
25.11.2018  BDO group            ICML potential projects
2.12.2018   Samuel Horvath       Random Shuffling Beats SGD after Finite Epochs
9.12.2018   Xun Qian             The Convergent Generalized Central Paths for Linearly Constrained Convex Programming


Date        Speaker                  Paper
13.2.2018   Nicolas Loizou           Random inexact projection methods
20.2.2018   Filip Hanzely            The Implicit Bias of Gradient Descent on Separable Data
27.2.2018   El Houcine Bergou        Random direct search method for unconstrained minimization
4.3.2018    Samuel Horvath           Fast Incremental Method for Nonconvex Optimization
11.3.2018   Alibek Sailanbayev       signSGD: Compressed Optimisation for Non-Convex Problems
18.3.2018   Konstantin Mishchenko    Penalty reformulation for constrained optimization
25.3.2018   Adel Bibi                Analytic Expressions for Probabilistic Moments of PL-DNN with Gaussian Input
8.4.2018    Konstantin Mishchenko    A Simple Practical Accelerated Method for Finite Sums
15.4.2018   Filip Hanzely            On the Convergence of Adam and Beyond
29.4.2018   Samuel Horvath           Second-Order Stochastic Optimization for Machine Learning in Linear Time
6.5.2018    Matthias Mueller         Optimization for Deep Learning
27.5.2018   El Houcine Bergou        A Line-Search Algorithm Inspired by the Adaptive Cubic Regularization Framework and Complexity Analysis

Date        Speaker                  Paper
22.8.2017   Aritra Dutta             A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices (Dutta, Li and Richtárik - 7/2017)
29.8.2017   Filip Hanzely            Relatively-Smooth Convex Optimization by First-Order Methods, and Applications (Lu, Freund and Nesterov - 10/2016)
12.9.2017   Filip Hanzely            Randomized methods for relative smooth optimization
19.9.2017   Aritra Dutta             Self-Occlusion and Disocclusion in Causal Video Object Segmentation
26.9.2017   Konstantin Mishchenko    An Asynchronous Distributed Prox-Grad Algorithm
3.10.2017   Konstantin Mishchenko    An Asynchronous Distributed Prox-Grad Algorithm
10.10.2017  Alibek Sailanbayev       Breaking Locality Accelerates Block Gauss-Seidel
17.10.2017  Sebastian Stich          Approximate steepest coordinate descent
24.10.2017  Viktor Lukáček           Dykstra's algorithm with Bregman projections: A convergence proof
31.10.2017  Nikita Doikov            Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians; Cubic regularization of Newton method and its global performance
7.11.2017   Konstantin Mishchenko    Proximal-Proximal-Gradient Method
14.11.2017  Nicolas Loizou           First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization
21.11.2017  Robert Gower             SAGA is a variant of stochastic gradient: new view and new proof
28.11.2017  Filip Hanzely            "Relative Continuity" for Non-Lipschitz Non-Smooth Convex Optimization using Stochastic (or Deterministic) Mirror Descent
5.12.2017   Konstantin Mishchenko    SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient