Randomized algorithms have become popular as data science and neural networks came into fashion. Traditional randomized algorithms, especially stochastic gradient descent, have been studied for many years; such algorithms are also relevant to high-dimensional probability, random matrix theory, and related fields.
Matrix version of the AM-GM inequality. This is completely nontrivial for non-commuting positive definite matrices, and the general case remains open. [The version of Recht & Re (2008) is NOT true in general for m > 4, see Lai & Lim (2020); the case m = 4 is an open problem, see here for the code. For the other version, due to Duchi (2012), see Israel (2016) for a proof of the case m = 3.]
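One can at least search numerically for counterexamples. Below is a minimal sketch, assuming the Recht-Re-style formulation (the operator norm of the average of products over distinct index tuples should be at most that over all tuples); the helper names `sample_psd` and `amgm_gap` are illustrative, not from the literature.

```python
# Random search for counterexamples to a matrix AM-GM inequality
# (Recht-Re-style formulation; a sketch, not a definitive test).
import itertools
import numpy as np

def sample_psd(d, rng):
    """Random d x d positive semidefinite matrix."""
    B = rng.standard_normal((d, d))
    return B @ B.T

def amgm_gap(mats, m):
    """||mean over all index tuples|| - ||mean over distinct tuples||
    in operator norm; the conjectured inequality asserts this is >= 0."""
    n, d = len(mats), mats[0].shape[0]
    with_rep = np.zeros((d, d))
    for idx in itertools.product(range(n), repeat=m):
        P = np.eye(d)
        for i in idx:
            P = P @ mats[i]
        with_rep += P
    with_rep /= n ** m

    without_rep = np.zeros((d, d))
    count = 0
    for idx in itertools.permutations(range(n), m):  # distinct indices
        P = np.eye(d)
        for i in idx:
            P = P @ mats[i]
        without_rep += P
        count += 1
    without_rep /= count  # count = n!/(n-m)!

    op = lambda M: np.linalg.norm(M, 2)  # spectral norm
    return op(with_rep) - op(without_rep)

rng = np.random.default_rng(0)
n, m, d = 5, 3, 4
for trial in range(100):
    mats = [sample_psd(d, rng) for _ in range(n)]
    gap = amgm_gap(mats, m)
    if gap < -1e-10:
        print("candidate counterexample, gap =", gap)
```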
A further question is to derive lower/upper bounds for the Loewner form after homogenization. A natural conjecture: the upper bound n!/(n-m)! is always sharp, while the lower bound is bounded below by -C n!/(n-m)! for some constant C independent of m and n. Moreover, writing C(m,n) for the best such constant, does C(m,n) tend to 1 as n goes to infinity, i.e., is the bound asymptotically sharp?
For a wide class of inverse problems solved via optimization, the central difficulty is the non-convexity of the landscape. Regularization might help, but there is no guarantee. Numerical methods for general non-convex problems require stochastic approaches, but for many inverse problems the global minimum is unique, yet still difficult to locate with traditional gradient-based methods, since one easily gets trapped at local minima. One option is simulated annealing; due to the special type of landscape, it may be possible to derive a nontrivial convergence rate.
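For concreteness, here is a minimal simulated-annealing sketch on a toy 1-D landscape with many local minima and a unique global minimum; the objective, proposal scale, and logarithmic cooling schedule are illustrative assumptions, not a recipe for any particular inverse problem.

```python
# Minimal simulated annealing on a non-convex landscape with a
# unique global minimum at x = 0 (a sketch under toy assumptions).
import numpy as np

def objective(x):
    # many local minima from the oscillatory term; global minimum at x = 0
    return x**2 + 2.0 * np.sin(5.0 * x)**2

def simulated_annealing(f, x0, n_iters=20000, T0=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for t in range(1, n_iters + 1):
        T = T0 / np.log(1.0 + t)        # logarithmic cooling schedule
        y = x + rng.normal(scale=0.5)   # Gaussian random proposal
        fy = f(y)
        # accept downhill moves always, uphill moves with prob exp(-(fy-fx)/T)
        if fy <= fx or rng.random() < np.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

print(simulated_annealing(objective, x0=3.0))
```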
As a follow-up to dynamic step sizes for gradient descent: is it possible to derive a similar conclusion for the stochastic version of gradient descent? Step-size scheduling could improve the convergence rate. Moreover, if the step sizes may also depend on the "closeness to the optimum", is it possible to obtain the convergence rate of Newton's method?
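A small experiment contrasting a constant step size with a 1/t-type schedule on a noisy least-squares problem makes the question concrete; the model, noise level, and schedules below are illustrative assumptions, not taken from any specific result.

```python
# Toy experiment: constant vs. scheduled step sizes for SGD on
# noisy least squares (a sketch, not a definitive benchmark).
import numpy as np

rng = np.random.default_rng(0)
d, N = 10, 1000
A = rng.standard_normal((N, d))
x_star = rng.standard_normal(d)
b = A @ x_star + 0.1 * rng.standard_normal(N)   # noisy targets

def sgd(schedule, n_epochs=50):
    x = np.zeros(d)
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(N):
            t += 1
            # stochastic gradient of 0.5 * (a_i . x - b_i)^2
            g = (A[i] @ x - b[i]) * A[i]
            x -= schedule(t) * g
    return np.linalg.norm(x - x_star)

# A constant step stalls at a noise floor; a decaying schedule keeps
# averaging the noise out, which is the effect scheduling should capture.
print("constant step:", sgd(lambda t: 1e-2))
print("1/t schedule :", sgd(lambda t: 1.0 / (100.0 + 0.1 * t)))
```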
Improve the DKW inequality in high dimensions by a correction factor of size exp(-k^4/n).
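For reference, the classical one-dimensional DKW bound P(sup_x |F_n(x) - F(x)| > eps) <= 2 exp(-2 n eps^2) is easy to check by simulation; the sketch below illustrates this baseline (the high-dimensional correction factor is the open part, and is not implemented here).

```python
# Monte Carlo check of the classical 1-D DKW inequality
# (baseline only; the parameters are arbitrary illustrations).
import numpy as np

rng = np.random.default_rng(0)
n, eps, trials = 200, 0.1, 5000
exceed = 0
for _ in range(trials):
    u = np.sort(rng.uniform(size=n))     # uniform samples, so F(x) = x
    grid = np.arange(1, n + 1) / n
    # sup deviation of the empirical CDF is attained at the sample points
    dev = max(np.max(grid - u), np.max(u - (grid - 1.0 / n)))
    if dev > eps:
        exceed += 1
print("empirical tail:", exceed / trials)
print("DKW bound     :", 2 * np.exp(-2 * n * eps**2))
```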