Full list of papers
Journal Papers:
Zhang, J., Wang, M., Hong, M., & Zhang, S., 2023. Primal-dual first-order methods for affinely constrained multi-block saddle point problems. SIAM Journal on Optimization, to appear. [paper]
Huang, K., Zhang, J., & Zhang, S., 2022. Cubic regularized Newton method for saddle point models: a global and local convergence analysis. Journal of Scientific Computing, 91(2): 60. [paper]
Hong, M., Zeng, S., Zhang, J., & Sun, H., 2022. On the Divergence of Decentralized Non-Convex Optimization. SIAM Journal on Optimization, 32(4): 2879-2908. [paper]
Zhang, J., Hong, M., & Zhang, S., 2022. On lower iteration complexity bounds for the saddle point problems. Mathematical Programming, 194(1-2): 901-935. [paper]
Zhang, J., & Xiao, L., 2021. Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization. Mathematical Programming, pp. 1-43. [paper]
Davis, D., Drusvyatskiy, D., Xiao, L., & Zhang, J., 2021. From Low Probability to High Confidence in Stochastic Convex Optimization. Journal of Machine Learning Research, 22(49). [paper]
Zhang, J., Bedi, A. S., Wang, M., & Koppel, A., 2021. Cautious Reinforcement Learning via Distributional Risk in the Dual Domain. IEEE Journal on Selected Areas in Information Theory. [paper]
Zhang, J., & Xiao, L., 2021. Adaptive stochastic variance reduction for subsampled Newton method with cubic regularization. INFORMS Journal on Optimization, to appear. [paper]
Zhang, J., & Xiao, L., 2021. Multilevel Composite Stochastic Optimization via Nested Variance Reduction. SIAM Journal on Optimization, 31(2): 1131-1157. [paper]
Zhang, J., Ma, S., & Zhang, S., 2020. Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis. Mathematical Programming, 184(1): 445-490. [paper]
Zhang, J., Liu, H., Wen, Z., & Zhang, S., 2018. A sparse completely positive relaxation of the modularity maximization for community detection. SIAM Journal on Scientific Computing, 40(5): A3091-A3120. [paper]
Causey, J. L., Zhang, J., Ma, S., Jiang, B., Qualls, J. A., Politte, D. G., ... & Huang, X., 2018. Highly accurate model for prediction of lung nodule malignancy with CT scans. Scientific Reports, 8(1): 1-12. [paper] (equal-contribution first author)
Zhang, J., Wen, Z., & Zhang, Y., 2017. Subspace methods with local refinements for eigenvalue computation using low-rank tensor-train format. Journal of Scientific Computing, 70(2): 478-499. [paper]
Conference Proceedings:
Chen, F., Zhang, J., & Wen, Z., 2022. A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP. Advances in Neural Information Processing Systems (NeurIPS). [paper]
Zhang, J., Bedi, A. S., Wang, M., & Koppel, A., 2022. MARL with General Utilities via Decentralized Shadow Reward Actor-Critic. AAAI Conference on Artificial Intelligence (AAAI). [paper]
Zhang, J., Ni, C., Yu, Z., Szepesvari, C., & Wang, M., 2021. On the convergence and sample efficiency of variance-reduced policy gradient method. Advances in Neural Information Processing Systems (NeurIPS). [paper] (selected as spotlight paper)
Zhang, J., Hong, M., Wang, M., & Zhang, S., 2021. Generalization bounds for stochastic saddle point problems. International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 568-576. PMLR. [paper]
Zhang, J., Koppel, A., Bedi, A. S., Szepesvari, C., & Wang, M., 2020. Variational policy gradient method for reinforcement learning with general utilities. Advances in Neural Information Processing Systems (NeurIPS), 33, pp. 4572-4583. [paper] (selected as spotlight paper)
Wang, L., Wu, W., Zhang, J., Liu, H., Bosilca, G., Herlihy, M., & Fonseca, R., 2020. FFT-based Gradient Sparsification for the Distributed Training of Deep Neural Networks. International Symposium on High-Performance Parallel and Distributed Computing (HPDC), pp. 113-124. [paper]
Zhang, J., & Xiao, L., 2019. A composite randomized incremental gradient method. International Conference on Machine Learning (ICML), pp. 7454-7462. PMLR. [paper]
Zhang, J., & Xiao, L., 2019. A stochastic composite gradient method with incremental variance reduction. Advances in Neural Information Processing Systems (NeurIPS), 32, pp. 9078-9088. [paper]
Under Review:
Ke, Z., Wen, Z., & Zhang, J., 2023. Provably Efficient Gauss-Newton Temporal Difference Learning Method with Function Approximation. arXiv preprint arXiv:2302.13087. [paper]
Zhu, Z., Chen, F., Zhang, J., & Wen, Z., 2022. A Unified Primal-Dual Algorithm Framework for Inequality Constrained Problems. arXiv preprint arXiv:2208.14196.
Zhang, J., & Hong, M., 2020. First-Order Algorithms Without Lipschitz Gradient: A Sequential Local Optimization Approach. arXiv preprint arXiv:2010.03194. [paper]
Unpublished Technical Notes:
Zhang, J., & Zhang, S., 2018. A cubic regularized Newton's method over Riemannian manifolds. arXiv preprint arXiv:1805.05565. [paper]