Zhang Junyu
Assistant Professor
Department of Industrial Systems Engineering and Management
National University of Singapore
Research Interests:
Saddle point problems: algorithm design & analysis, complexity lower bounds
Stochastic optimization: variance reduction methods, sample complexity analysis
Optimization theory for reinforcement learning
Composite optimization: prox-linear methods, stochastic composite optimization
Riemannian optimization
Selected Journal Papers:
Bedi, A.S., Parayil, A., Zhang, J., Wang, M., & Koppel, A., 2024. On the sample complexity and metastability of heavy-tailed policy search in continuous control. Journal of Machine Learning Research, 25(39), pp.1-58. [paper]
Zhang, J., Wang, M., Hong, M., & Zhang, S., 2023. Primal-dual first-order methods for affinely constrained multi-block saddle point problems. SIAM Journal on Optimization, accepted, to appear. [paper]
Hong, M., Zeng, S., Zhang, J., & Sun, H., 2022. On the Divergence of Decentralized Non-Convex Optimization. SIAM Journal on Optimization, 32(4), pp.2879-2908. [paper]
Zhang, J., Hong, M., & Zhang, S., 2022. On lower iteration complexity bounds for the saddle point problems. Mathematical Programming, 194(1-2), pp.901-935. [paper]
Zhang, J., & Xiao, L., 2021. Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization. Mathematical Programming, pp.1-43. [paper]
Davis, D., Drusvyatskiy, D., Xiao, L., & Zhang, J., 2021. From Low Probability to High Confidence in Stochastic Convex Optimization. Journal of Machine Learning Research, 22(49). [paper]
Zhang, J., & Xiao, L., 2021. Multilevel Composite Stochastic Optimization via Nested Variance Reduction. SIAM Journal on Optimization, 31(2), pp.1131-1157. [paper]
Zhang, J., Ma, S., & Zhang, S., 2020. Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis. Mathematical Programming, 184(1), pp.445-490. [paper]
Zhang, J., Liu, H., Wen, Z., & Zhang, S., 2018. A sparse completely positive relaxation of the modularity maximization for community detection. SIAM Journal on Scientific Computing, 40(5), pp.A3091-A3120. [paper]
Selected Conference Proceedings:
Chen, F., Zhang, J., & Wen, Z., 2022. A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP. Advances in Neural Information Processing Systems (NeurIPS). [paper]
Zhang, J., Ni, C., Yu, Z., Szepesvari, C., & Wang, M., 2021. On the convergence and sample efficiency of variance-reduced policy gradient method. Advances in Neural Information Processing Systems (NeurIPS). [paper] (selected as spotlight paper)
Zhang, J., Hong, M., Wang, M., & Zhang, S., 2021. Generalization bounds for stochastic saddle point problems. International Conference on Artificial Intelligence and Statistics (AISTATS), pp.568-576, PMLR. [paper]
Zhang, J., Koppel, A., Bedi, A.S., Szepesvari, C., & Wang, M., 2020. Variational policy gradient method for reinforcement learning with general utilities. Advances in Neural Information Processing Systems (NeurIPS), 33, pp.4572-4583. [paper] (selected as spotlight paper)
Zhang, J., & Xiao, L., 2019. A composite randomized incremental gradient method. International Conference on Machine Learning (ICML), pp.7454-7462, PMLR. [paper]
Zhang, J., & Xiao, L., 2019. A stochastic composite gradient method with incremental variance reduction. Advances in Neural Information Processing Systems (NeurIPS), 32, pp.9078-9088. [paper]