Selected Publications
Thanh Dang, Melih Barsbey, AKM Rokonuzzaman Sonet, Mert Gurbuzbalaban, Umut Simsekli and Lingjiong Zhu "Algorithmic stability of stochastic gradient descent with momentum under heavy-tailed noise" (2025). arXiv:2502.00885.
Hengrong Du, Qi Feng, Changwei Tu, Xiaoyu Wang and Lingjiong Zhu "Non-reversible Langevin algorithms for constrained sampling" (2025). arXiv:2501.11743.
Mert Gurbuzbalaban, Mohammad Rafiqul Islam, Xiaoyu Wang and Lingjiong Zhu "Generalized EXTRA stochastic gradient Langevin dynamics" (2024). arXiv:2412.01993.
George Rapakoulias, Ali Reza Pedram, Fengjiao Liu, Lingjiong Zhu and Panagiotis Tsiotras "Go with the flow: Fast diffusion for Gaussian mixture models" (2024). arXiv:2412.09059.
Xuefeng Gao and Lingjiong Zhu "Convergence analysis for general probability flow ODEs of diffusion models in Wasserstein distances" (2025). International Conference on Artificial Intelligence and Statistics (AISTATS 2025).
Umut Simsekli, Mert Gurbuzbalaban, Sinan Yildirim and Lingjiong Zhu "Privacy of SGD under Gaussian or heavy-tailed noise: Guarantees without gradient clipping" (2024). arXiv:2403.02051.
Xuefeng Gao, Hoang M Nguyen and Lingjiong Zhu "Wasserstein convergence guarantees for a general class of score-based generative models" (2025). Journal of Machine Learning Research. 26, 1-54.
Mert Gurbuzbalaban, Yuanhan Hu, Umut Simsekli, Kun Yuan and Lingjiong Zhu "Heavy-tail phenomenon in decentralized SGD" (2025). IISE Transactions. 57, 788-802.
Mert Gurbuzbalaban, Yuanhan Hu and Lingjiong Zhu "Penalized overdamped and underdamped Langevin Monte Carlo algorithms for constrained sampling" (2024). Journal of Machine Learning Research. 25, 1-67.
Lingjiong Zhu, Mert Gurbuzbalaban, Anant Raj and Umut Simsekli "Uniform-in-time Wasserstein stability bounds for (noisy) stochastic gradient descent" (2023). Advances in Neural Information Processing Systems (NeurIPS 2023).
Anant Raj, Lingjiong Zhu, Mert Gurbuzbalaban and Umut Simsekli "Algorithmic stability of heavy-tailed SGD with general loss functions" (2023). International Conference on Machine Learning.
Anant Raj, Melih Barsbey, Mert Gurbuzbalaban, Lingjiong Zhu and Umut Simsekli "Algorithmic stability of heavy-tailed stochastic gradient descent on least squares" (2023). International Conference on Algorithmic Learning Theory.
Mert Gurbuzbalaban, Yuanhan Hu, Umut Simsekli and Lingjiong Zhu "Cyclic and randomized stepsizes invoke heavier tails in SGD than constant stepsizes" (2023). Transactions on Machine Learning Research.
Xuefeng Gao, Mert Gurbuzbalaban and Lingjiong Zhu "Global convergence of stochastic gradient Hamiltonian Monte Carlo for non-convex stochastic optimization: Non-asymptotic performance bounds and momentum-based acceleration" (2022). Operations Research. 70, 2931-2947.
Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar, Umut Simsekli and Lingjiong Zhu "Robust distributed accelerated stochastic gradient methods for multi-agent networks" (2022). Journal of Machine Learning Research. 23, 1-96.
Mert Gurbuzbalaban, Umut Simsekli and Lingjiong Zhu "The heavy-tail phenomenon in SGD" (2021). International Conference on Machine Learning.
Alexander Camuto, Xiaoyu Wang, Lingjiong Zhu, Mert Gurbuzbalaban, Chris Holmes and Umut Simsekli "Asymmetric heavy tails and implicit bias in Gaussian noise injections" (2021). International Conference on Machine Learning.
Alexander Camuto, George Deligiannidis, Murat Erdogdu, Mert Gurbuzbalaban, Umut Simsekli and Lingjiong Zhu "Fractal structure and generalization properties of stochastic optimization algorithms" (2021). Advances in Neural Information Processing Systems (NeurIPS 2021).
Hongjian Wang, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Simsekli and Murat Erdogdu "Convergence rates of stochastic gradient descent under infinite noise variance" (2021). Advances in Neural Information Processing Systems (NeurIPS 2021).
Mert Gurbuzbalaban, Xuefeng Gao, Yuanhan Hu and Lingjiong Zhu "Decentralized stochastic gradient Langevin dynamics and Hamiltonian Monte Carlo" (2021). Journal of Machine Learning Research. 22, 1-69.
Umut Simsekli, Lingjiong Zhu, Yee Whye Teh and Mert Gurbuzbalaban "Fractional underdamped Langevin dynamics: Retargeting SGD with momentum under heavy-tailed gradient noise" (2020). International Conference on Machine Learning.
Xuefeng Gao, Mert Gurbuzbalaban and Lingjiong Zhu "Breaking reversibility accelerates Langevin dynamics for global non-convex optimization" (2020). Advances in Neural Information Processing Systems (NeurIPS 2020).
Bugra Can, Mert Gurbuzbalaban and Lingjiong Zhu "Accelerated linear convergence of stochastic momentum methods in Wasserstein distances" (2019). International Conference on Machine Learning, 891-901.