S. Bruno, Y. Zhang, D.-Y. Lim, Ö.D. Akyildiz, and S. Sabanis: On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates. Transactions on Machine Learning Research, 2025. [arXiv]
A. Neufeld, M. Ng, and Y. Zhang: Non-asymptotic convergence bounds for modified tamed unadjusted Langevin algorithm in non-convex setting. Journal of Mathematical Analysis and Applications, 543(1): 128892, 2025. [arXiv]
D.-Y. Lim, A. Neufeld, S. Sabanis, and Y. Zhang: Langevin dynamics based algorithm e-THεO POULA for stochastic optimization problems with discontinuous stochastic gradient. Mathematics of Operations Research, 2024. [arXiv]
D.-Y. Lim, A. Neufeld, S. Sabanis, and Y. Zhang: Non-asymptotic estimates for TUSLA algorithm for non-convex learning with applications to neural networks with ReLU activation function. IMA Journal of Numerical Analysis, 44: 1464–1559, 2023. [arXiv]
Y. Zhang, Ö.D. Akyildiz, T. Damoulas, and S. Sabanis: Nonasymptotic estimates for Stochastic Gradient Langevin Dynamics under local conditions in nonconvex optimization. Applied Mathematics and Optimization, 87(2), 2023. [arXiv]
N.H. Chau, E. Moulines, M. Rásonyi, S. Sabanis, and Y. Zhang: On stochastic gradient Langevin dynamics with dependent data streams: the fully non-convex case. SIAM Journal on Mathematics of Data Science, 3(3): 959–986, 2021. [arXiv]
M. Barkhagen, N.H. Chau, E. Moulines, M. Rásonyi, S. Sabanis, and Y. Zhang: On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case. Bernoulli, 27(1): 1–33, 2021. [arXiv]
S. Sabanis and Y. Zhang: Higher order Langevin Monte Carlo algorithm. Electronic Journal of Statistics, 13(2): 3805–3850, 2019. [arXiv]
S. Sabanis and Y. Zhang: On explicit order 1.5 approximations with varying coefficients: the case of super-linear diffusion coefficients. Journal of Complexity, 50: 84–115, 2019. [arXiv]
I. Lytras, S. Sabanis, and Y. Zhang: kTULA: A Langevin sampling algorithm with improved KL bounds under super-linear log-gradients. Preprint, 2025. arXiv:2506.04878
L. Liang, A. Neufeld, and Y. Zhang: Non-asymptotic convergence analysis of the stochastic gradient Hamiltonian Monte Carlo algorithm with discontinuous stochastic gradient with applications to training of ReLU neural networks. Preprint, 2024. arXiv:2409.17107
A. Neufeld and Y. Zhang: Non-asymptotic estimates for accelerated high order Langevin Monte Carlo algorithms. Preprint, 2024. arXiv:2405.05679
A. Neufeld, M. Ng, and Y. Zhang: Robust SGLD algorithm for solving non-convex distributionally robust optimisation problems. Preprint, 2024. arXiv:2403.09532
S. Sabanis and Y. Zhang: A fully data-driven approach to minimizing CVaR for portfolio of assets via SGLD with discontinuous updating. Preprint, 2020. arXiv:2007.01672