Research
Integer Programming
Differing Columns of Δ-modular Integer Programs
Studied the parameter Δ of integer programs defined by equality constraints and box constraints, where Δ is the maximum absolute determinant over all bases (full-order square submatrices) of the constraint matrix
Gave the first bound on the number of columns that is polynomial in both Δ and the number of equations
Gave a tight bound on the number of differing columns in a bimodular constraint matrix, the first tight bound since Heller's classical result on totally unimodular (TU) matrices
Joseph Paat, Ingo Stallknecht, Zach Walsh, Luze Xu. On the column number and forbidden submatrices of Δ-modular matrices. Accepted in SIAM Journal on Discrete Mathematics, 2023.
Jon Lee, Joseph Paat, Ingo Stallknecht, Luze Xu. Polynomial upper bounds on the number of differing columns of Δ-modular integer programs. Mathematics of Operations Research, 2022.
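As a concrete illustration of the parameter (my own sketch, not code from the papers), Δ can be computed for a small integer matrix by enumerating all m×m submatrices and taking the maximum absolute determinant:

```python
# Sketch: compute Delta = max |det(B)| over all m x m submatrices B
# of an m x n integer constraint matrix A (brute force; fine for tiny A).
from itertools import combinations

def det(M):
    # Laplace expansion along the first row (exact for integer entries).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def delta(A):
    m, n = len(A), len(A[0])
    return max(abs(det([[A[i][j] for j in cols] for i in range(m)]))
               for cols in combinations(range(n), m))

# A totally unimodular matrix has Delta = 1; this matrix is bimodular:
A = [[1, 1, 0],
     [1, -1, 2]]
print(delta(A))  # 2
```

All three 2×2 submatrices here have determinant ±2, so Δ = 2.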
Proximity
Aimed to tighten, or show the tightness of, bounds on the distance between an optimal solution of an integer (or mixed-integer) program and an optimal solution of its LP relaxation
Gave a tighter bound for mixed-integer linear optimization with k-regular coefficient matrices (k-regularity is related to the circuit imbalance measure)
Gave the first bound on proximity that is polynomial in both the number of equations m and Δ, using recent sparsity results
Jon Lee, Joseph Paat, Ingo Stallknecht, Luze Xu. Improving proximity bounds using sparsity. ISCO 2020, 115-127. Lecture Notes in Computer Science, Springer.
Luze Xu, Jon Lee. On proximity for k-regular mixed-integer linear optimization. In: WCGO 2019: Optimization of Complex Systems: Theory, Models, Algorithms and Applications, 2020, Le Hoai An Thi, Hoai Minh Le, Tao Pham Dinh, eds. pp. 438-447.
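A toy illustration of proximity (my own sketch, not an instance from the papers): for a small knapsack, the distance between the LP-relaxation optimum and the integer optimum is tiny compared with classical bounds of the form n·Δ.

```python
# Knapsack toy example:  max 3*x1 + 2*x2  s.t.  2*x1 + 3*x2 <= 7,  0 <= xi <= 3.
from itertools import product

c, a, b, ub = (3, 2), (2, 3), 7, 3

# LP optimum by the greedy value/weight rule (valid for one knapsack row).
order = sorted(range(2), key=lambda i: -c[i] / a[i])
cap, x_lp = b, [0.0, 0.0]
for i in order:
    x_lp[i] = min(ub, cap / a[i])
    cap -= a[i] * x_lp[i]

# IP optimum by brute force over the integer box.
x_ip = max((p for p in product(range(ub + 1), repeat=2)
            if a[0] * p[0] + a[1] * p[1] <= b),
           key=lambda p: c[0] * p[0] + c[1] * p[1])

# Proximity: the l-infinity distance between the two optima.
dist = max(abs(x_ip[i] - x_lp[i]) for i in range(2))
print(x_lp, x_ip, dist)  # LP opt (3, 1/3), IP opt (3, 0), distance 1/3
```

Here the LP and IP optima differ only in the fractional coordinate, so the proximity is 1/3; bounds like the ones studied above control how large this gap can get in terms of m and Δ.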
Mixed-Integer Nonlinear Optimization
Gaining or Losing Perspective
Studied MINLO (mixed-integer nonlinear optimization) formulations of the disjunction: y is zero when x is zero, and y is governed by a convex function of x when x belongs to an interval not containing 0
Investigated various relaxations related to piecewise-linear under-estimators of the convex function
Used volume as a metric to characterize the optimal placement of linearization points for convex power functions and analyzed the monotone behavior of Newton's method on our volume minimization problem
Luze Xu, Jon Lee. (2022). Gaining or losing perspective for convex multivariate functions on a simplex.
Jon Lee, Daphne Skipper, Emily Speakman, Luze Xu. Gaining or losing perspective for piecewise-linear under-estimators of convex univariate functions. Journal of Optimization Theory and Applications, 2023.
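To make the volume metric concrete (a 1-D sketch of my own, under the assumption that the under-estimator is the max of tangent lines at the chosen linearization points): for f(x) = x² on [0, 1] with one interior linearization point t, a grid search over the area between f and its piecewise-linear under-estimator recovers the symmetric placement t = 1/2.

```python
# Area ("volume" in 1-D) between a convex power function and the max of
# tangent lines at linearization points {0, t, 1}, via midpoint quadrature.
def gap_area(t, n=2000, p=2):
    f = lambda x: x ** p
    df = lambda x: p * x ** (p - 1)
    pts = (0.0, t, 1.0)                              # linearization points
    under = lambda x: max(f(s) + df(s) * (x - s) for s in pts)
    h = 1.0 / n
    return sum((f((k + 0.5) * h) - under((k + 0.5) * h)) * h for k in range(n))

# Grid search for the area-minimizing interior point.
best_t = min((i / 100 for i in range(1, 100)), key=gap_area)
print(best_t)  # 0.5 for f(x) = x**2, by symmetry
```

For x² the gap from the tangent at s is exactly (x − s)², so the total area is (1/12)(t³ + (1 − t)³), minimized at t = 1/2; for asymmetric power functions the optimal placement shifts, which is what the volume analysis characterizes.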
More Virtuous Smoothing
Aimed to smooth univariate functions with mild non-smoothness, such as the square-root function, in the context of global optimization of MINLO
Used a successful approach of D'Ambrosio et al.: replace the original function on [0, δ] with a homogeneous cubic that matches the function and its first two derivatives at δ
Gave a necessary and sufficient condition for virtuous smoothing functions to preserve desirable properties of the original function, such as being increasing and concave
Generalized previous lower-bound and improved-bound results for virtuous smoothing functions to all root functions and beyond
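The smoothing construction described above can be sketched as follows (the closed-form coefficients are my own derivation from the three matching conditions, not taken from the papers): for f(x) = √x, solve for the homogeneous cubic p(x) = ax + bx² + cx³ (so p(0) = 0) that matches f, f′, and f″ at δ.

```python
import math

def smoothing_cubic(delta):
    """Coefficients (a, b, c) of p(x) = a*x + b*x**2 + c*x**3 matching
    sqrt and its first two derivatives at x = delta."""
    F = math.sqrt(delta)            # f(delta)
    G = 0.5 / math.sqrt(delta)      # f'(delta)
    H = -0.25 * delta ** -1.5       # f''(delta)
    # Back-substitution in:  a*d + b*d^2 + c*d^3 = F,
    #                        a + 2*b*d + 3*c*d^2 = G,
    #                        2*b + 6*c*d         = H.
    c = (F - G * delta + 0.5 * H * delta ** 2) / delta ** 3
    b = 0.5 * H - 3 * c * delta
    a = G - H * delta + 3 * c * delta ** 2
    return a, b, c

delta = 0.01
a, b, c = smoothing_cubic(delta)
p = lambda x: a * x + b * x ** 2 + c * x ** 3
print(p(delta), math.sqrt(delta))   # the two values agree at delta
```

The resulting cubic is smooth at 0 (unlike √x, whose derivative blows up there) and joins √x with C² continuity at δ; the cited analysis concerns when such a replacement stays increasing, concave, and a good bound.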
Sparse Generalized Inverses
Aimed to balance the tradeoff between the nice properties of the Moore-Penrose pseudoinverse and the sparsity of alternative generalized inverses, via techniques of sparse optimization
Designed new block-construction recipes and approximation algorithms for a 1-norm-minimizing symmetric reflexive generalized inverse of a symmetric matrix
Designed new block-construction recipes and approximation algorithms for a 1-norm-minimizing ah-symmetric (or ha-symmetric) reflexive generalized inverse
Analyzed the approximation ratios of the local search approximation algorithms
Marcia Fampa, Jon Lee, Gabriel Ponte, Luze Xu. Experimental analysis of local search for sparse reflexive generalized inverses. To appear in Journal of Global Optimization.
Luze Xu, Marcia Fampa, Jon Lee. (2020). 1-norm minimization and minimum-rank structured sparsity for symmetric and ah-symmetric generalized inverses: rank one and two.
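A minimal sketch in the spirit of the rank-one case (my own illustration, not the papers' construction): for a rank-1 matrix A = uvᵀ, a single well-placed nonzero entry 1/(uᵢvⱼ) already yields a reflexive generalized inverse, i.e. it satisfies the Penrose properties P1 (AHA = A) and P2 (HAH = H), while being vastly sparser than the Moore-Penrose pseudoinverse.

```python
# Rank-1 example: A = u v^T; put one nonzero 1/(u_i * v_j) at position (j, i).
# Picking the largest |u_i| and |v_j| keeps the entry, hence the 1-norm, small.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

u, v = [1, 2], [4, 1, 1]
A = [[ui * vj for vj in v] for ui in u]          # 2 x 3, rank 1

i = max(range(len(u)), key=lambda k: abs(u[k]))  # largest |u_i|
j = max(range(len(v)), key=lambda k: abs(v[k]))  # largest |v_j|
H = [[0.0] * len(u) for _ in v]                  # 3 x 2, one nonzero entry
H[j][i] = 1.0 / (u[i] * v[j])

assert matmul(matmul(A, H), A) == A              # P1: generalized inverse
assert matmul(matmul(H, A), H) == H              # P2: reflexive
```

H here generally fails the symmetry conditions P3 ((AH)ᵀ = AH) and P4 ((HA)ᵀ = HA) that the Moore-Penrose pseudoinverse satisfies; the ah-symmetric and symmetric variants studied above additionally enforce some of those conditions while still minimizing the 1-norm.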