Variational quantum algorithms (VQAs) rely on a classical optimizer to update the parameters of a quantum ansatz, but current quantum hardware introduces noise from both measurement statistics and device imperfections. Although model-based derivative-free optimizers have shown promising empirical performance in VQAs, they were not designed with noise in mind. This work incorporates recent advances from noise-aware numerical optimization into derivative-free model-based methods, yielding a new class of noise-aware derivative-free optimizers for VQAs. We study an implementation of these noise-aware derivative-free model-based methods and compare its performance on demonstrative VQA simulations with that of the classical solvers packaged in scikit-quant.
Published Paper:
J. Larson, M. Menickelly, J. Shi (2024). A Novel Noise-Aware Classical Optimizer for Variational Quantum Algorithms. INFORMS Journal on Computing.
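To convey the flavor of a noise-aware, model-based step, the following is a minimal Python sketch under stated assumptions, not the algorithm from the paper: it builds a finite-difference linear model, takes a trust-region-limited step, and accepts it only when the observed decrease clears the estimated noise level; instead of shrinking the radius below the noise-induced resolution, it buys accuracy with more measurement shots. The toy landscape in noisy_eval and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_eval(theta, n_shots, sigma=0.2):
    # Hypothetical stand-in for a VQA energy estimate: a smooth toy
    # landscape plus zero-mean shot noise decaying like 1/sqrt(n_shots).
    return np.sum(np.sin(theta) ** 2) + rng.normal(0.0, sigma / np.sqrt(n_shots))

def noise_aware_dfo(theta, delta=0.5, n_shots=100, sigma=0.2,
                    eta=0.1, max_iter=50):
    # Toy noise-aware, model-based trust-region loop.
    d = theta.size
    for _ in range(max_iter):
        eps = sigma / np.sqrt(n_shots)          # estimated noise in f-values
        f0 = noisy_eval(theta, n_shots, sigma)
        g = np.empty(d)
        for i in range(d):                      # linear interpolation model
            e = np.zeros(d)
            e[i] = delta
            g[i] = (noisy_eval(theta + e, n_shots, sigma) - f0) / delta
        step = -delta * g / (np.linalg.norm(g) + 1e-12)
        f_trial = noisy_eval(theta + step, n_shots, sigma)
        pred = delta * np.linalg.norm(g)        # model-predicted decrease
        if f0 - f_trial >= eta * pred + 2.0 * eps:   # noise-aware acceptance
            theta, delta = theta + step, min(2.0 * delta, 1.0)
        elif delta <= np.sqrt(eps):
            n_shots *= 4                        # quadrupling shots halves noise
        else:
            delta *= 0.5
    return theta

print(noise_aware_dfo(rng.uniform(-1.0, 1.0, size=4)))
```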
We develop, analyze, and implement algorithms for constrained stochastic optimization. These problems are pivotal in numerous areas, including machine learning, deep learning, statistics, stochastic optimal control, optimal power flow, multi-stage modeling, and portfolio optimization. Specifically, we have been developing a series of Stochastic Sequential Quadratic Programming methods that incorporate strategies such as adaptive step sizes, adaptive sample sizes, variance reduction techniques, step decomposition, and second-order matrix approximations.
Published Paper:
A. S. Berahas, J. Shi, Z. Yi, B. Zhou (2023). Accelerating Stochastic Sequential Quadratic Programming for Equality Constrained Stochastic Optimization using Predictive Variance Reduction. Computational Optimization and Applications: 1-38.
Papers Under Revision:
A. S. Berahas, J. Shi, R. Bollapragada (2024). Modified Line Search Sequential Quadratic Methods for Equality-Constrained Optimization with Unified Global and Local Convergence Guarantees. arXiv preprint arXiv:2406.11144.
A. S. Berahas, J. Shi, B. Zhou (2025). Optimistic Noise-Aware Sequential Quadratic Programming for Equality Constrained Optimization with Rank-Deficient Jacobians. arXiv preprint arXiv:2503.06702.
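As a concrete illustration of the basic iteration these methods build on, here is a sketch of a plain stochastic SQP loop on a toy equality-constrained problem; the identity Hessian surrogate, the fixed step size, and the toy objective and constraint are simplifying assumptions, not the adaptive step-size, sample-size, or variance-reduction strategies developed in the papers above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem (illustrative): min E[0.5 * ||x - xi||^2] with xi ~ N(mu, I),
# whose true gradient is x - mu, subject to c(x) = x[0]^2 + x[1]^2 - 1 = 0.
mu = np.array([2.0, 0.5])

def stoch_grad(x, batch=8):
    xi = rng.normal(mu, 1.0, size=(batch, 2))
    return np.mean(x - xi, axis=0)              # mini-batch stochastic gradient

def c(x):
    return np.array([x @ x - 1.0])              # equality constraint value

def J(x):
    return (2.0 * x).reshape(1, 2)              # constraint Jacobian

def stochastic_sqp(x, alpha=0.2, iters=300):
    # At each iterate, solve the Newton-KKT system with a stochastic
    # objective gradient and an identity Hessian surrogate, then step.
    for _ in range(iters):
        g, Jk, ck = stoch_grad(x), J(x), c(x)
        H = np.eye(2)                           # crude second-order surrogate
        K = np.block([[H, Jk.T], [Jk, np.zeros((1, 1))]])
        rhs = np.concatenate([-g, -ck])
        d = np.linalg.solve(K, rhs)[:2]         # primal SQP direction
        x = x + alpha * d                       # fixed step size for brevity
    return x

x_star = stochastic_sqp(np.array([1.5, 1.5]))
print(x_star, c(x_star))                        # near-feasible stationary point
```

Replacing the fixed alpha and small batch with adaptive step sizes and variance-reduced gradient estimates is precisely where the accelerations studied in the papers above enter.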
Stochastic approximation (SA) is a powerful class of iterative algorithms for nonlinear root-finding and stochastic optimization. Although SA is effective in practice and enjoys asymptotic convergence guarantees, its convergence theory rests on the key condition that the iterates remain within a bounded Euclidean ball, which motivates the development of effective constrained SA algorithms. In the first project, to overcome the drawbacks of the classical projected SA algorithm, we make the projection step computationally tractable under nonlinear constraints by using sequential quadratic programming (SQP) to solve the projection subproblem. Many stochastic optimization problems in multi-agent systems can be decomposed into smaller subproblems or restricted to reduced decision subspaces; cyclic and distributed approaches are two widely used strategies for solving such problems. In the second project, we review four existing methods for these problems and compare them in terms of the problem frameworks they suit and their update rules.
Published Paper:
J. Shi, J. C. Spall (2021). SQP-based Projection SPSA Algorithm for Stochastic Optimization with Inequality Constraints. American Control Conference (ACC), IEEE.
Manuscript:
J. Shi, J. C. Spall (2024). Difference Between Cyclic and Distributed Approach in Stochastic Optimization for Multi-agent System. arXiv preprint arXiv:2409.05155.
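The sketch below illustrates the first project's idea under simplifying assumptions: a two-measurement SPSA gradient estimate followed by projection onto a nonlinear feasible set, with the projection subproblem handed to SciPy's SLSQP solver (an off-the-shelf SQP method, used here purely for illustration). The toy loss and unit-disk constraint are assumptions, not examples from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def noisy_loss(theta):
    # Illustrative noisy objective: a quadratic plus measurement noise.
    return np.sum((theta - 2.0) ** 2) + rng.normal(0.0, 0.1)

# Inequality constraint in SciPy's convention (fun(x) >= 0): the unit disk.
cons = [{"type": "ineq", "fun": lambda x: 1.0 - x @ x}]

def project(z):
    # Projection subproblem min ||x - z||^2 s.t. x in the disk, solved
    # with SciPy's SLSQP, an off-the-shelf SQP method (illustrative choice).
    return minimize(lambda x: np.sum((x - z) ** 2), z,
                    method="SLSQP", constraints=cons).x

def projected_spsa(theta, iters=200):
    # Projected SPSA: simultaneous-perturbation gradient estimate from two
    # noisy measurements, then an SQP-solved projection onto the feasible set.
    for k in range(1, iters + 1):
        a_k = 0.1 / k ** 0.602                  # standard SPSA gain decay
        c_k = 0.1 / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher direction
        g_hat = (noisy_loss(theta + c_k * delta)
                 - noisy_loss(theta - c_k * delta)) / (2.0 * c_k * delta)
        theta = project(theta - a_k * g_hat)
    return theta

print(projected_spsa(np.zeros(2)))   # approaches (1, 1)/sqrt(2) on the boundary
```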
Working Papers:
A. S. Berahas, R. Bollapragada, S. Gupta, J. Shi (2025+). Adaptive Stochastic Variance Reduction Gradient Method for Nonconvex Optimization in Machine Learning.
A. S. Berahas, J. Shi, B. Zhou (2025+). Noisy Trust Region Subproblem.