Software

  1. Approximation Algorithms for D-optimal Design [codes]

  • We design greedy and local search algorithms for solving D-optimal design problems (a minimal sketch of the greedy idea appears after the references below)

  • The software is scalable and can be used for large-scale instances

    • References:

(a) Singh, M., Xie, W. (2020). Approximation Algorithms for D-optimal Design. Mathematics of Operations Research, 45(4), 1193-1620. (Authors in alphabetical order) [preprint][slides]

(b) Madan, V., Singh, M., Tantipongpipat, U., Xie, W. (2019). Combinatorial Algorithms for Optimal Design. In COLT 2019: Conference on Learning Theory (pp. 2210-2258). (Authors in alphabetical order) [paper]
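
For illustration only, here is a minimal sketch of the greedy idea (not the package's actual implementation): greedily add the candidate design point that most increases the log-determinant of the information matrix. The `ridge` regularizer and the random candidate set are illustrative assumptions.

```python
import numpy as np

def greedy_d_optimal(X, k, ridge=1e-6):
    """Greedily pick k rows of X (candidate design points) so that the
    log-determinant of the information matrix sum_i x_i x_i^T is large.
    A small ridge term keeps the matrix nonsingular before d points are chosen."""
    n, d = X.shape
    chosen = []
    M = ridge * np.eye(d)                      # current information matrix
    for _ in range(k):
        _, logdet_M = np.linalg.slogdet(M)
        best_gain, best_i = -np.inf, None
        for i in range(n):
            if i in chosen:
                continue
            _, logdet_c = np.linalg.slogdet(M + np.outer(X[i], X[i]))
            if logdet_c - logdet_M > best_gain:
                best_gain, best_i = logdet_c - logdet_M, i
        chosen.append(best_i)
        M += np.outer(X[best_i], X[best_i])
    return chosen

# Toy usage: pick 10 of 200 random candidate points in R^5
rng = np.random.default_rng(0)
print(greedy_d_optimal(rng.standard_normal((200, 5)), k=10))
```

A practical implementation would use rank-one determinant updates (matrix determinant lemma) rather than recomputing the log-determinant for every candidate, which is what allows the released code to scale.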

  2. Unbiased Subdata Selection for Fair Classification [codes]

  • We design an iterative refining strategy (IRS) to solve large-scale instances of fair classification problems (a toy sketch of the alternating idea follows the reference below)

  • The software alternates between improving the classification accuracy and performing the unbiased subdata selection

  • It applies to fair SVM, fair logistic regression, and fair CNNs

  • Reference: Ye, Q.*, Xie, W. (2020). Unbiased Subdata Selection for Fair Classification: A Unified Framework and Scalable Algorithms. Submitted. [preprint][slides]
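
The exact IRS updates are given in the paper; the toy sketch below only illustrates the alternating structure, assuming a logistic-regression base learner, 0/1 labels, and a simple per-group "keep the best-fitted fraction" selection rule, all of which are illustrative assumptions rather than the paper's rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def irs_sketch(X, y, group, keep_frac=0.8, n_rounds=5, seed=0):
    """Alternate between (1) fitting a classifier on the currently selected
    subdata and (2) reselecting, within each protected group, the same
    fraction of best-fitted points.  Labels y are assumed to be 0/1."""
    rng = np.random.default_rng(seed)
    n = len(y)
    selected = rng.random(n) < keep_frac               # random initial subdata
    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        clf.fit(X[selected], y[selected])              # step 1: improve accuracy
        loss = -clf.predict_log_proba(X)[np.arange(n), y]
        selected = np.zeros(n, dtype=bool)             # step 2: per-group reselection
        for g in np.unique(group):
            idx = np.where(group == g)[0]
            keep = int(keep_frac * len(idx))
            selected[idx[np.argsort(loss[idx])[:keep]]] = True
    clf.fit(X[selected], y[selected])                  # final fit on the selected subdata
    return clf, selected
```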

  3. ALSO-X for Solving a Chance-Constrained Program [codes]

  • We develop and generalize the ALSO-X algorithm, originally proposed by Ahmed, Luedtke, Song, and Xie (2017), for solving chance-constrained programs

  • We improve ALSO-X with an alternating minimization subroutine, termed the "ALSO-X+" algorithm

  • The software improves upon the accuracy of the well-known convex approximation, the CVaR method (a bisection sketch of the basic ALSO-X idea follows the reference below)

  • Reference: Jiang, N., Xie, W. (2020). ALSO-X is Better Than CVaR: Convex Approximations for Chance Constrained Programs Revisited. Submitted. [preprint][slides]
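
Below is a sketch of the basic ALSO-X bisection, assuming a scenario-based linear chance constraint P(a(ξ)'x ≤ b(ξ)) ≥ 1 − ε over N equiprobable scenarios and user-supplied objective bounds `lo`, `hi`; the ALSO-X+ alternating-minimization subroutine from the paper is not reproduced here.

```python
import numpy as np
import cvxpy as cp

def also_x(c, A, b, eps, lo, hi, tol=1e-3):
    """For a trial objective level t, minimize the total scenario violation
    subject to c'x <= t; accept t if at most eps*N scenarios remain violated,
    and bisect on t.  Rows of A and entries of b are the scenarios."""
    N, n = A.shape
    x = cp.Variable(n)
    s = cp.Variable(N, nonneg=True)
    t = cp.Parameter()
    prob = cp.Problem(cp.Minimize(cp.sum(s)), [A @ x - b <= s, c @ x <= t])
    x_best = None
    while hi - lo > tol:
        t.value = 0.5 * (lo + hi)
        prob.solve()
        violated = np.sum(A @ x.value - b > 1e-6)
        if violated <= eps * N:        # chance constraint holds at this level
            hi, x_best = t.value, x.value
        else:
            lo = t.value
    return x_best, hi                  # x_best is None if no feasible t was found in [lo, hi]
```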

  4. Software for Solving the Maximum Entropy Sampling Problem [codes]

  • We design and analyze a local search algorithm for solving very large-scale instances of the maximum entropy sampling problem (a minimal swap-based sketch follows the reference below)

  • Numerical results demonstrate that our algorithm yields an optimality gap below 1%

  • Reference: Li, Y.*, Xie, W. (2020). Best Principal Submatrix Selection for the Maximum Entropy Sampling Problem: Scalable Algorithms and Performance Guarantees. Submitted. [preprint][poster][slides][video]
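
As a rough illustration of the local search (not the released implementation), the sketch below starts from a random size-s subset and performs swap moves whenever they increase the log-determinant of the selected principal submatrix; the tolerance and sweep limit are illustrative.

```python
import numpy as np

def local_search_mesp(C, s, seed=0, max_sweeps=100):
    """Swap-based local search for maximum entropy sampling:
    choose s indices S maximizing log det C[S, S] for a covariance matrix C."""
    n = C.shape[0]
    rng = np.random.default_rng(seed)
    S = set(rng.choice(n, size=s, replace=False).tolist())

    def logdet(idx):
        idx = sorted(idx)
        return np.linalg.slogdet(C[np.ix_(idx, idx)])[1]

    best = logdet(S)
    for _ in range(max_sweeps):
        improved = False
        for i in list(S):
            for j in set(range(n)) - S:
                val = logdet((S - {i}) | {j})
                if val > best + 1e-10:
                    S, best, improved = (S - {i}) | {j}, val, True
                    break
            if improved:
                break
        if not improved:               # local optimum: no improving swap remains
            break
    return sorted(S), best
```

The released code scales to very large instances by updating the log-determinant with low-rank formulas instead of refactorizing the submatrix for every candidate swap.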

  5. Software for Cluster-aware Supervised Learning (CluSL) [codes]

  • We design Cluster-aware Supervised Learning (CluSL) frameworks and algorithms that explore clustering structure to improve supervised learning results (a toy cluster-wise regression sketch follows the reference below)

  • The software applies to cluster-wise regression, cluster-wise classification, and cluster-wise CNNs

  • It improves upon conventional methods such as random forests, SVC, and CNNs

  • Reference: Chen, S., Xie, W. (2020). On the Cluster-aware Supervised Learning (CluSL): Frameworks, Convergent Algorithms, and Applications. INFORMS Journal on Computing. Accepted. [preprint][slides]
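
The CluSL framework in the paper covers regression, classification, and CNNs with convergence guarantees; the toy cluster-wise regression below only illustrates the alternating structure, with a Ridge base learner, random initial clusters, and a squared-error reassignment rule as illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def clusterwise_regression(X, y, n_clusters=3, n_rounds=20, seed=0):
    """Alternate between (1) fitting one regressor per cluster and
    (2) reassigning every point to the cluster whose regressor predicts it best."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=len(y))
    labels[:n_clusters] = np.arange(n_clusters)        # no empty cluster at the start
    models = [Ridge() for _ in range(n_clusters)]
    for _ in range(n_rounds):
        for k in range(n_clusters):
            if (labels == k).any():
                models[k].fit(X[labels == k], y[labels == k])
        # squared prediction error of every cluster's model on every point
        errs = np.stack([(m.predict(X) - y) ** 2 for m in models], axis=1)
        new_labels = errs.argmin(axis=1)
        if np.array_equal(new_labels, labels):         # assignments have stabilized
            break
        labels = new_labels
    return models, labels
```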

  6. Software for Sparse Ridge Regression [codes]

  • We design a fast implementation of the greedy (i.e., forward selection) method for solving sparse ridge regression (a bare-bones sketch of the greedy step follows the reference below)

  • The software can be 10 times faster than the state of the art

  • Reference: Xie, W., Deng, X. (2020). Scalable Algorithms for the Sparse Ridge Regression. SIAM Journal on Optimization, 30(4), 3359–3386. [preprint][slides]
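
Here is a bare-bones sketch of greedy forward selection for sparse ridge regression; the `lam` value and the plain re-solve per candidate are illustrative, whereas the paper's fast implementation avoids re-solving from scratch at every step.

```python
import numpy as np

def forward_selection_ridge(X, y, k, lam=1.0):
    """Greedily grow a support S (|S| <= k) for
    min ||y - X[:, S] beta||^2 + lam * ||beta||^2,
    adding at each step the feature that most reduces the objective."""
    n, p = X.shape

    def ridge_obj(cols):
        Xs = X[:, cols]
        beta = np.linalg.solve(Xs.T @ Xs + lam * np.eye(len(cols)), Xs.T @ y)
        resid = y - Xs @ beta
        return resid @ resid + lam * beta @ beta, beta

    S, beta = [], None
    for _ in range(k):
        best_obj, best_j, best_beta = np.inf, None, None
        for j in range(p):
            if j in S:
                continue
            obj, b = ridge_obj(S + [j])
            if obj < best_obj:
                best_obj, best_j, best_beta = obj, j, b
        S.append(best_j)
        beta = best_beta
    return S, beta
```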