Zhang Junyu
Assistant Professor
Department of Industrial Systems Engineering and Management
National University of Singapore
Research Interests:
Saddle point problems: algorithm design and analysis, complexity lower bounds
Stochastic optimization: variance reduction methods, sample complexity analysis
Optimization theory for reinforcement learning
Composite optimization: prox-linear methods, stochastic composite optimization
Riemannian optimization
Hiring. Currently, I'm looking for one Ph.D. student and one postdoctoral researcher.
For the Ph.D. position, applicants with backgrounds in mathematics, computer science, or statistics are favored, as my recent research projects are mostly theory oriented. If you are interested in the research topics listed above and have strong math and coding skills, please contact me. If you have a matching background but are unfamiliar with my research, you are also welcome to reach out and discuss it with me.
For the postdoctoral position, I am looking for researchers who are familiar with reinforcement learning, or with optimization topics such as saddle point problems and stochastic optimization. If you work on other topics and believe your research may connect with my research area, you are also welcome to contact me for a discussion.
Selected Journal Papers:
Zhang, J., Wang, M., Hong, M. and Zhang, S., 2023. Primal-dual first-order methods for affinely constrained multi-block saddle point problems. SIAM Journal on Optimization, accepted, to appear. [paper]
Hong, M., Zeng, S., Zhang, J. and Sun, H., 2022. On the Divergence of Decentralized Non-Convex Optimization. SIAM Journal on Optimization, 32(4), pp. 2879-2908. [paper]
Zhang, J., Hong, M. and Zhang, S., 2022. On lower iteration complexity bounds for the saddle point problems. Mathematical Programming, 194(1-2), pp. 901-935. [paper]
Zhang, J. and Xiao, L., 2021. Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization. Mathematical Programming, pp. 1-43. [paper]
Davis, D., Drusvyatskiy, D., Xiao, L. and Zhang, J., 2021. From Low Probability to High Confidence in Stochastic Convex Optimization. Journal of Machine Learning Research, 22(49). [paper]
Zhang, J. and Xiao, L., 2021. MultiLevel Composite Stochastic Optimization via Nested Variance Reduction. SIAM Journal on Optimization, 31(2), pp. 1131-1157. [paper]
Zhang, J., Ma, S. and Zhang, S., 2020. Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis. Mathematical Programming, 184(1), pp. 445-490. [paper]
Zhang, J., Liu, H., Wen, Z. and Zhang, S., 2018. A sparse completely positive relaxation of the modularity maximization for community detection. SIAM Journal on Scientific Computing, 40(5), pp. A3091-A3120. [paper]
Selected Conference Proceedings:
Chen, F., Zhang, J. and Wen, Z., 2022. A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP. Advances in Neural Information Processing Systems (NeurIPS). [paper]
Zhang, J., Ni, C., Yu, Z., Szepesvari, C. and Wang, M., 2021. On the convergence and sample efficiency of variance-reduced policy gradient method. Advances in Neural Information Processing Systems (NeurIPS). [paper] (selected as spotlight paper)
Zhang, J., Hong, M., Wang, M. and Zhang, S., 2021. Generalization bounds for stochastic saddle point problems. International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 568-576, PMLR. [paper]
Zhang, J., Koppel, A., Bedi, A.S., Szepesvari, C. and Wang, M., 2020. Variational policy gradient method for reinforcement learning with general utilities. Advances in Neural Information Processing Systems (NeurIPS), 33, pp. 4572-4583. [paper] (selected as spotlight paper)
Zhang, J. and Xiao, L., 2019. A composite randomized incremental gradient method. International Conference on Machine Learning (ICML), pp. 7454-7462, PMLR. [paper]
Zhang, J. and Xiao, L., 2019. A stochastic composite gradient method with incremental variance reduction. Advances in Neural Information Processing Systems (NeurIPS), 32, pp. 9078-9088. [paper]