Key Words:
3D printing / Additive Manufacturing
Supply chain resilience
Large assembly manufacturing systems
Stochastic systems, stochastic combinatorial optimization
While 3D printing offers flexible, on-demand manufacturing, its transformative potential remains under-realized due to high costs and limited capacity. Accordingly, prior research has focused on niche applications like mass customization and spare parts, but we explore its untapped potential in supply chain resilience and large assembly systems.
Specifically, we propose 3DP as a resilience strategy against supply disruptions and evaluate its cost-effectiveness through a multi-stage stochastic optimization framework, capturing key decisions from capacity investment to backup allocation. Our analysis shows that 3DP is particularly effective in large systems with weakly correlated supplier risks, where a modest investment is typically enough to unlock substantial cost savings. Building on this insight, we develop a highly scalable algorithm to estimate 3DP's cost-effectiveness in large supply chains and validate our findings using real-world data from a global toy manufacturer.
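To make the modeling idea concrete, here is a minimal sample-average-approximation sketch in Python. It is a toy single-capacity version with hypothetical cost and demand parameters, not the multi-stage model of the paper: 3DP capacity is reserved up front, supplier disruptions are sampled, and disrupted demand is either backed up by 3DP (up to capacity) or lost at a penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) parameters -- not calibrated to the paper's data.
n_suppliers = 20          # number of products / primary suppliers
p_disrupt = 0.1           # marginal disruption probability (weakly correlated risks)
demand = rng.uniform(50, 150, size=n_suppliers)
c_capacity = 1.0          # cost per unit of reserved 3DP capacity
c_print = 2.0             # variable 3DP production cost per unit
c_lost = 10.0             # penalty per unit of unmet demand
n_scenarios = 5000

# Sample disruption scenarios (independent suppliers for simplicity).
disrupted = rng.random((n_scenarios, n_suppliers)) < p_disrupt
shortfall = (disrupted * demand).sum(axis=1)   # total demand needing backup

def expected_cost(capacity):
    """Capacity investment plus expected recourse cost under the sampled scenarios."""
    covered = np.minimum(shortfall, capacity)  # pooled backup allocation
    recourse = c_print * covered + c_lost * (shortfall - covered)
    return c_capacity * capacity + recourse.mean()

# Sample-average approximation: search over a grid of capacity levels.
grid = np.linspace(0, demand.sum(), 200)
costs = [expected_cost(k) for k in grid]
best = grid[int(np.argmin(costs))]
print(f"best 3DP capacity ≈ {best:.1f} units "
      f"(vs. total demand {demand.sum():.1f}), cost ≈ {min(costs):.1f}")
print(f"no-3DP cost ≈ {expected_cost(0.0):.1f}")
```

Even in this toy instance the pooling effect appears: because disruptions are weakly correlated, the cost-minimizing capacity sits far below total demand, echoing the "modest investment" finding above.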
In ongoing work, we study how 3DP's flexibility can accelerate large-scale assembly systems. Using stochastic systems analysis, we show that strategically deploying 3DP to target component shortages significantly outperforms strategies that accelerate each component uniformly—achieving, for example, over 50% improvement in overall production speed for sufficiently large systems.
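The assembly result can likewise be illustrated with a deliberately simple Monte Carlo comparison (our own toy model with i.i.d. component times and a hypothetical speed-up budget, not the stochastic analysis in the paper): an assembly finishes only when its slowest component does, so concentrating a fixed 3DP budget on the lagging component shortens the makespan more than spreading the same budget uniformly.

```python
import numpy as np

rng = np.random.default_rng(1)

n_components = 100      # components in the assembly (hypothetical)
budget = 5.0            # total extra production rate provided by 3DP
n_trials = 20000

# Baseline component completion times (i.i.d., for illustration only).
T = rng.exponential(scale=1.0, size=(n_trials, n_components))

# Uniform strategy: every component is sped up by the same small factor.
makespan_uniform = T.max(axis=1) / (1.0 + budget / n_components)

# Targeted strategy: devote the whole budget to the slowest component.
T_sorted = np.sort(T, axis=1)
slowest, second = T_sorted[:, -1], T_sorted[:, -2]
makespan_targeted = np.maximum(second, slowest / (1.0 + budget))

print(f"mean makespan, no 3DP   : {T.max(axis=1).mean():.3f}")
print(f"mean makespan, uniform  : {makespan_uniform.mean():.3f}")
print(f"mean makespan, targeted : {makespan_targeted.mean():.3f}")
```

The gap between the two strategies in this toy experiment is the effect that the ongoing analysis quantifies as the system size grows.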
Publications from this project:
3D-Printing for Large Assembly Systems, Ziyu He, Yuxuan Ouyang, Vishal Gupta and Andrew Daw, in preparation, 2025.
3D-Printing for Supply Chain Resilience, Ziyu He, Vishal Gupta and Nick Vyas, Manufacturing & Service Operations Management, under review, 2025.
Key Words:
Adaptive Importance Sampling
Bayesian Hierarchical Models
Markov Random Fields
This paper explores Logarithmic Integral Optimization (LIO) problems, providing a unified computational framework for various tasks in computational statistics. Key among these are Maximum Likelihood Estimation (MLE) and Maximum a Posteriori (MAP) inference for probabilistic models. Specifically, we investigate scenarios where the model consists of conditional density functions with intractable normalizers. This feature can pose substantial computational challenges for the associated LIO, especially when coupled with the growing prevalence of nonconvex and nondifferentiable models in contemporary applications. To address these challenges, we propose an efficient algorithm for LIO, termed Adaptive Importance Sampling-based Surrogation. This method is designed to handle nonconvexity and nondifferentiability simultaneously, while also improving the sampling approximation of the intractable integral term in LIO through variance reduction. The algorithm is justified by our analysis, which establishes almost sure subsequential convergence to a necessary candidate for a local minimizer, referred to as a surrogation stationary point. Furthermore, we demonstrate the effectiveness of our algorithm through extensive numerical experiments, confirming its efficiency and stability in facilitating more advanced probabilistic models with intractable normalizers.
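As a minimal illustration of the two ingredients the algorithm combines, namely a sampling approximation of the expectation induced by the intractable normalizer and adaptation of the proposal to reduce variance, the Python sketch below fits a one-dimensional exponential-family model by gradient ascent, estimating the intractable expectation with self-normalized importance sampling and re-fitting a Gaussian proposal at every iteration. It is a toy, not the surrogation method of the paper; the normalizer here is actually tractable and is only treated as unknown for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data from N(1, 1); in natural parameters theta = (mu/sigma^2, -1/(2 sigma^2)) = (1, -0.5).
data = rng.normal(loc=1.0, scale=1.0, size=500)

def suff_stats(x):
    return np.stack([x, x**2], axis=-1)              # T(x) = (x, x^2)

def log_unnormalized(x, theta):
    return suff_stats(x) @ theta                     # log p_theta(x) + log Z(theta)

# MLE gradient: mean(T(data)) - E_theta[T(X)]; the expectation is "intractable"
# and is estimated by self-normalized importance sampling under a proposal q.
T_bar = suff_stats(data).mean(axis=0)

theta = np.array([0.0, -0.5])                        # start at a standard normal
q_mean, q_std = 0.0, 1.0                             # initial Gaussian proposal
n_draws, lr = 4000, 0.1

for it in range(300):
    x = rng.normal(q_mean, q_std, size=n_draws)
    log_q = -0.5 * ((x - q_mean) / q_std) ** 2 - np.log(q_std)
    log_w = log_unnormalized(x, theta) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                     # self-normalized weights
    E_T = w @ suff_stats(x)                          # IS estimate of E_theta[T(X)]

    theta += lr * (T_bar - E_T)                      # gradient ascent on the log-likelihood
    theta[1] = min(theta[1], -1e-3)                  # keep the density integrable

    # Adapt the proposal: moment-match it to the current model estimate.
    q_mean = float(E_T[0])
    q_std = float(np.sqrt(max(E_T[1] - E_T[0] ** 2, 1e-3)))

print("estimated natural parameters:", theta, "(target ≈ [1.0, -0.5])")
```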
Publications from this project:
Logarithmic Integral Optimization via Adaptive Importance Sampling Based Surrogation Methods, Ziyu He, Junyi Liu and Jong-Shi Pang, Mathematical Programming, 2025.
Key Words:
Nonconvex Surrogates for Zero-Norm
Strong Polynomial Solvability
Nonconvex Parametric Programming
Hyperparameter Selection
In sparsity learning, there is a critical dilemma in choosing the regularizer that induces sparsity among model parameters. The ideal choice, the L0 function, can be computationally prohibitive, whereas convex relaxations such as the L1 function, though faster to compute, can produce undesirable solutions. To bridge this gap, we studied nonconvex regularizers of the folded concave type paired with training objectives possessing a so-called Z-property, a structure with widespread applications in Bayesian statistics and finance. By proposing an algorithm that is guaranteed to terminate at a directional stationary solution in a linear number of steps, we identified an important class of sparsity learning problems that are strongly polynomially solvable.
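The dilemma is easiest to see on a separable toy problem, where each of the three regularizers admits a closed-form scalar thresholding operator. The Python sketch below (our own illustration with arbitrary parameters, not the linear-step algorithm of the papers) evaluates the L0, L1, and capped-L1 operators by candidate enumeration: L1 shrinks every large coefficient, whereas L0 and the folded concave capped L1 leave large coefficients unbiased.

```python
import numpy as np

def prox_l0(b, lam):
    """argmin_x 0.5*(x-b)^2 + lam*1{x != 0}: hard thresholding."""
    return b if 0.5 * b**2 > lam else 0.0

def prox_l1(b, lam):
    """argmin_x 0.5*(x-b)^2 + lam*|x|: soft thresholding."""
    return np.sign(b) * max(abs(b) - lam, 0.0)

def prox_capped_l1(b, lam, theta):
    """argmin_x 0.5*(x-b)^2 + lam*min(|x|, theta), by candidate enumeration."""
    s = np.sign(b) if b != 0 else 1.0
    candidates = [
        s * np.clip(abs(b) - lam, 0.0, theta),   # minimizer on the piece |x| <= theta
        s * max(abs(b), theta),                  # minimizer on the flat piece |x| >= theta
    ]
    obj = lambda x: 0.5 * (x - b) ** 2 + lam * min(abs(x), theta)
    return min(candidates, key=obj)

lam, theta = 1.0, 1.5
for b in [0.5, 1.2, 3.0, -4.0]:
    print(f"b = {b:+.1f}  L0 -> {prox_l0(b, lam):+.2f}  "
          f"L1 -> {prox_l1(b, lam):+.2f}  capped-L1 -> {prox_capped_l1(b, lam, theta):+.2f}")
```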
An extension of sparsity learning is to trace a path of solutions as a function of the hyperparameter that trades off the training objective and the regularizer. In this way, the model learned under an ideal hyperparameter can be selected according to an additional criterion. However, this task faces the same dilemma of choosing the proper regularizer. In response, we investigated the analytical properties of, and computational methods for, the solution paths traced by three regularizers: the ideal L0, its convex relaxation L1, and the nonconvex capped L1 as an in-between candidate. Our numerical studies demonstrate the advantage of the nonconvex regularizer in balancing statistical performance and computational effort, making it an appealing option for hyperparameter selection.
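For the path-tracing task, the L1 case is already supported in standard software; the Python sketch below uses scikit-learn's lasso_path on synthetic data purely to illustrate what a solution path looks like as the penalty weight varies. The L0 and capped-L1 paths studied in this project require the specialized parametric algorithms analyzed in the papers.

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(3)

# Sparse ground truth: 5 active features out of 50.
n, p, k = 200, 50, 5
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = rng.normal(scale=2.0, size=k)
y = X @ beta + 0.5 * rng.normal(size=n)

# Trace the L1 (lasso) solution path over a decreasing grid of penalties.
alphas, coefs, _ = lasso_path(X, y, n_alphas=30)

for alpha, coef in zip(alphas, coefs.T):
    support = int(np.count_nonzero(coef))
    print(f"alpha = {alpha:8.4f}   |support| = {support:2d}")
```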
Publications from this project:
Comparing Solution Paths of Sparse Quadratic Minimization with a Stieltjes Matrix, Ziyu He, Shaoning Han, Andres Gomez, Ying Cui and Jong-Shi Pang, Mathematical Programming, 2023.
Linear-Step Solvability of Some Folded Concave and Singly-Parametric Sparse Optimization Problems, Andres Gomez, Ziyu He and Jong-Shi Pang, Mathematical Programming, 2022.
Key Words:
Nonconvex Robust Optimization
Adversarial Training for Deep Learning
Robust Optimization with Equilibrium Constraints
Robust optimization (RO) seeks a solution that hedges against the worst-case scenario generated by uncertainty contained in a given set. The key challenge in nonconvex RO is that nonconvexity is commonly coupled with the nondifferentiability introduced by the robustness-inducing treatment. In this project, we established analytical properties of the computable solutions of nonconvex RO problems via a generalized saddle-point theorem and game-theoretic interpretations. A majorization-minimization algorithm is proposed as the main computational machinery, and our numerical experiments verify the effectiveness of RO on several nonconvex problems, e.g., adversarial deep learning.
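As a small self-contained illustration of the min-max structure, the Python sketch below performs generic projected-gradient adversarial training of a linear logistic classifier against l-infinity input perturbations; it is a stand-in for the general idea, not the value-function/majorization-minimization machinery developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy binary classification data with labels in {-1, +1}.
n, d = 400, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.3 * rng.normal(size=n))

eps = 0.2                     # radius of the l-infinity uncertainty set
pgd_steps, pgd_lr = 10, 0.05  # inner maximization parameters
epochs, lr = 200, 0.1         # outer minimization parameters

def logistic_loss_grad_w(w, X, y):
    margins = y * (X @ w)
    sig = 1.0 / (1.0 + np.exp(margins))           # sigmoid(-margin)
    return -(sig * y) @ X / len(y)                # gradient of the mean log-loss in w

w = np.zeros(d)
for epoch in range(epochs):
    # Inner maximization: projected gradient ascent over the perturbation delta.
    delta = np.zeros_like(X)
    for _ in range(pgd_steps):
        margins = y * ((X + delta) @ w)
        sig = 1.0 / (1.0 + np.exp(margins))
        grad_delta = -(sig * y)[:, None] * w[None, :]   # d(loss_i)/d(delta_i)
        delta = np.clip(delta + pgd_lr * np.sign(grad_delta), -eps, eps)

    # Outer minimization: gradient step on the worst-case (robustified) loss.
    w -= lr * logistic_loss_grad_w(w, X + delta, y)

clean_acc = np.mean(np.sign(X @ w) == y)
robust_acc = np.mean(np.sign((X + delta) @ w) == y)
print(f"clean accuracy ≈ {clean_acc:.3f}, accuracy on last PGD perturbations ≈ {robust_acc:.3f}")
```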
Publication from this project:
Nonconvex Robust Programming via Value-Function Optimization, Ying Cui, Ziyu He and Jong-Shi Pang, Computational Optimization and Applications, 2021.
Key Words:
Deep Learning
Nonconvex Nonsmooth Multi-Composite Optimization
Majorization-Minimization, Exact Penalty, Semi-Smooth Newton
We present a novel deterministic algorithmic framework that enables the computation of a directional stationary solution of the deep neural network training problem, formulated as a multi-composite optimization problem with coupled nonconvexity and nondifferentiability. To our knowledge, this is the first time such a sharp kind of stationary solution is provably computable for a nonsmooth deep neural network. The proposed approach combines exact penalization, majorization-minimization, gradient projection, and the dual semismooth Newton method, each serving a particular purpose in the overall computational scheme. Contrary to existing stochastic approaches, which provide at best very weak guarantees on the solutions computed in practical implementations, our rigorous deterministic treatment guarantees the stationarity properties of the computed solutions with respect to the optimization problems being solved. Numerical results demonstrate the effectiveness of the framework for solving reasonably sized networks with a modest number of training samples (in the low thousands).
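The multi-composite viewpoint can be sketched on a tiny one-hidden-layer ReLU network: the hidden activations become an explicit block of variables, the defining composition becomes a penalized coupling constraint, and the blocks are updated in turn. The Python toy below is entirely our own simplification, using a plain quadratic penalty and heuristic block updates in place of the paper's exact penalization, majorization-minimization, gradient projection, and semismooth Newton machinery, and it carries no stationarity guarantee; it is meant only to convey the reformulation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy regression data for a one-hidden-layer ReLU network (hypothetical sizes).
n, d, h = 200, 5, 10
X = rng.normal(size=(d, n))
W1_true = rng.normal(size=(h, d))
W2_true = rng.normal(size=(1, h))
Y = W2_true @ np.maximum(W1_true @ X, 0.0) + 0.1 * rng.normal(size=(1, n))

relu = lambda A: np.maximum(A, 0.0)

# Multi-composite reformulation: the hidden activations Z are explicit variables,
# and the coupling constraint Z = relu(W1 X) is enforced by a quadratic penalty.
W1 = 0.1 * rng.normal(size=(h, d))
W2 = 0.1 * rng.normal(size=(1, h))
Z = relu(W1 @ X)
rho, lr_w1 = 10.0, 1e-2

def penalized_objective(W1, W2, Z):
    fit = 0.5 * np.sum((W2 @ Z - Y) ** 2) / n
    coupling = 0.5 * rho * np.sum((Z - relu(W1 @ X)) ** 2) / n
    return fit + coupling

for it in range(500):
    # W2 block: least squares in W2 given Z (closed form).
    W2 = Y @ Z.T @ np.linalg.pinv(Z @ Z.T)

    # Z block: strongly convex quadratic in Z (closed form, columnwise).
    A = W2.T @ W2 + rho * np.eye(h)
    Z = np.linalg.solve(A, W2.T @ Y + rho * relu(W1 @ X))

    # W1 block: one (sub)gradient step on the nonsmooth coupling term.
    H = W1 @ X
    G = (rho / n) * ((relu(H) - Z) * (H > 0)) @ X.T
    W1 -= lr_w1 * G

rmse = np.sqrt(np.mean((W2 @ relu(W1 @ X) - Y) ** 2))
print(f"penalized objective ≈ {penalized_objective(W1, W2, Z):.4f}, training RMSE ≈ {rmse:.4f}")
```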
Publication from this project:
Multi-Composite Optimization for Training Deep Neural Networks, Ying Cui, Ziyu He and Jong-Shi Pang, SIAM Journal on Optimization, 2020.