Efficient Algorithms for Regularized Nonnegative Scale-invariant Low-rank Approximation Models

This git repository provides a new framework for computing "Homogeneous Regularized Scale-invariant" (HRSI) low-rank approximation models, which are formulated as follows:

min_{X_i \in R^{m_i x r}, i <= n}  f({X_i}_{i<=n}) + \sum_{i=1}^{n} \mu_i \sum_{q=1}^{r} g_i(X_i[:,q])    (1)

where f is a continuous map, {\mu_i }_{i <= n} is a set of positive regularization hyperparameters, and {g_i }_{i <= n} is a set of lower semi-continuous regularization maps. 

This framework allows the design of efficient algorithms with convergence guarantees to critical points of Problem (1) under mild assumptions, relying on three key ingredients: (1) cyclic Block-Coordinate Descent (BCD), (2) the Majorization-Minimization (MM) framework, and (3) an efficient balancing strategy. We showcase our contributions on sparse NMF, ridge-regularized CPD and sparse NTD; see the preprint for more details:

J. E. Cohen and V. Leplat. "Efficient Algorithms for Regularized Nonnegative Scale-invariant Low-rank Approximation Models", 2024. [arXiv]
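To give a feel for the balancing ingredient: f is scale-invariant in the sense that scaling a column W[:, q] by λ and the matching row H[q, :] by 1/λ leaves the product WH (hence f) unchanged, so λ can be chosen in closed form to minimize the regularization terms. Below is a minimal NumPy sketch for the two-factor case with positively homogeneous regularizers of degrees p1 and p2 (ℓ1 norms in the demo); the function and variable names are illustrative, not the repository's API.

```python
import numpy as np

def balance_columns(W, H, mu1, mu2, p1=1.0, p2=1.0, eps=1e-12):
    # Scaling W[:, q] by lam and H[q, :] by 1/lam leaves WH unchanged.
    # For g1, g2 positively homogeneous of degrees p1, p2, minimizing
    #     mu1 * lam**p1 * g1(W[:, q]) + mu2 * lam**(-p2) * g2(H[q, :])
    # over lam > 0 gives the closed form below (first-order optimality).
    for q in range(W.shape[1]):
        a = np.abs(W[:, q]).sum()  # g1 = l1 norm (degree 1), for illustration
        b = np.abs(H[q, :]).sum()  # g2 = l1 norm (degree 1), for illustration
        lam = ((p2 * mu2 * b) / (p1 * mu1 * a + eps)) ** (1.0 / (p1 + p2))
        W[:, q] *= lam
        H[q, :] /= lam
    return W, H
```

In the full algorithms, such a rescaling is interleaved with the BCD/MM updates of each block; see the preprint for the exact schedule and the convergence guarantees.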

Block Majorization-Minimization with Extrapolation and Applications to beta-NMF

This git repository provides a new optimization framework for solving a class of multi-convex optimization problems, based on the Majorization-Minimization principle with new extrapolation rules. We apply the new algorithms to solve NMF with β-divergences; see the preprint for more details:

L.-T.-K. Hien, V. Leplat and N. Gillis. "Block Majorization Minimization with Extrapolation and Application to β-NMF", 2024. [arXiv]
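For orientation, the sketch below combines the classical MM (multiplicative) update for β-NMF with a simple fixed-weight extrapolation of the iterates; the paper's contribution lies in new, adaptive extrapolation rules with convergence guarantees, which this generic sketch does not reproduce, and all names here are illustrative.

```python
import numpy as np

def mu_step(V, W, H, beta, eps=1e-12):
    # MM/multiplicative update for H in D_beta(V | WH); for beta in [1, 2]
    # this is a guaranteed-descent MM step (outside that range an exponent
    # correction is classically applied).
    WH = W @ H + eps
    return H * (W.T @ (WH ** (beta - 2) * V)) / (W.T @ (WH ** (beta - 1)) + eps)

def beta_nmf_extrapolated(V, r, beta=1.0, alpha=0.5, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    W_old, H_old = W.copy(), H.copy()
    for _ in range(n_iter):
        # update each block at an extrapolated (then re-projected) point
        H_bar = np.maximum(H + alpha * (H - H_old), 1e-12)
        H, H_old = mu_step(V, W, H_bar, beta), H
        W_bar = np.maximum(W + alpha * (W - W_old), 1e-12)
        W, W_old = mu_step(V.T, H.T, W_bar.T, beta).T, W  # W update by symmetry
    return W, H
```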

Nonnegative Tensor Decomposition via Collaborative Neurodynamic Optimization

This git repository provides a new framework for computing a nonnegative tensor decomposition of an input tensor using the principles of collaborative neurodynamic optimization; see the preprint for more details:

S. Ahmadi-Asl, V. Leplat, Anh Huy Phan and A. Cichocki. "Nonnegative Tensor Decomposition Via Collaborative Neurodynamic Optimization", 2023. [on request]

A new Tensor Network - Tubal Tensor Train

This git repository provides the MatLab implementation of the Tubal Tensor Train (TTT) decomposition, a novel tensor decomposition model that effectively mitigates the curse of dimensionality inherent in the Tensor Singular Value Decomposition (T-SVD). The TTT decomposition represents an $N$-order tensor as the contraction, via the Tubal product (T-product), of a chain of two third-order and $(N-3)$ fourth-order core tensors; see the preprint for more details:

S. Ahmadi-Asl, V. Leplat, Anh Huy Phan and A. Cichocki. "A New Tensor Network: Tubal Tensor Train Network and Its Applications", 2023. [on request]
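For background, the T-product that glues the cores together multiplies frontal slices in the Fourier domain along the third (tubal) mode; below is a minimal NumPy sketch of this standard operation (the repository itself is in MatLab, and the names here are illustrative).

```python
import numpy as np

def t_product(A, B):
    """Tubal product (T-product) of A (n1 x n2 x n3) and B (n2 x n4 x n3):
    FFT along the tubal mode, frontal-slice matrix products, inverse FFT."""
    assert A.shape[1] == B.shape[0] and A.shape[2] == B.shape[2]
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ikt,kjt->ijt', Ah, Bh)  # per-slice matrix multiply
    return np.real(np.fft.ifft(Ch, axis=2))

A = np.random.default_rng(0).standard_normal((4, 3, 5))
B = np.random.default_rng(1).standard_normal((3, 2, 5))
print(t_product(A, B).shape)  # (4, 2, 5)
```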

Deep Nonnegative Matrix Factorization with Beta Divergences

This git repository provides a new framework for computing a deep nonnegative matrix factorization of an input matrix, using the family of β-divergences as loss functions, with and without minimum-volume regularization; see the preprint for more details:

V. Leplat, L.-T.-K. Hien, N. Gillis and A. Onwunta. "Deep Nonnegative Matrix Factorization with Beta Divergences", 2023. [arXiv]
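Deep NMF stacks factorizations, X ≈ W1 H1, H1 ≈ W2 H2, …, H_{L-1} ≈ W_L H_L, so that X ≈ W1 W2 ⋯ W_L H_L. The sketch below illustrates the common layer-wise initialization with plain KL (β = 1) multiplicative updates; the paper's algorithms then refine all layers jointly, handle general β, and support minimum-volume penalties, none of which is reproduced here.

```python
import numpy as np

def kl_nmf(X, r, n_iter=100, seed=0, eps=1e-12):
    # standard multiplicative updates for KL-NMF (beta = 1)
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W, H = rng.random((m, r)) + 0.1, rng.random((r, n)) + 0.1
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (X / WH)) / (W.T @ np.ones_like(X) + eps)
        WH = W @ H + eps
        W *= ((X / WH) @ H.T) / (np.ones_like(X) @ H.T + eps)
    return W, H

def deep_nmf_init(X, ranks):
    # sequential (layer-wise) factorization: X ~ W1 W2 ... WL HL
    Ws, H = [], X
    for r in ranks:
        W, H = kl_nmf(H, r)
        Ws.append(W)
    return Ws, H
```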

Direct Exoplanet Detection using L-1 Norm Low-Rank Approximation

This git repository provides the Python implementation of a new method for exoplanet detection with direct imaging, based on L-1 norm low-rank approximation; see the preprint:

H. Daglayan, S. Vary, V. Leplat, N. Gillis and P.-A. Absil, "Direct Exoplanet Detection using L-1 Norm Low-rank Approximation", accepted at the BeNeLearn 2023 conference. [arXiv]
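At the heart of the method is the ℓ1-norm low-rank approximation min_{U,V} ||X - UV||_1, whose robustness to sparse outliers (such as a faint planet signal over the speckle background) is the motivation. One textbook way to approach it, sketched below for intuition only, is iteratively reweighted least squares (IRLS) over the two factors; this is a generic sketch, not the paper's algorithm.

```python
import numpy as np

def l1_lra_irls(X, r, n_iter=50, eps=1e-6, seed=0):
    # IRLS for min_{U,V} ||X - U V||_1: reweight each entry by 1/|residual|
    # and alternate row-/column-wise weighted least-squares updates.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    for _ in range(n_iter):
        Wt = 1.0 / np.maximum(np.abs(X - U @ V), eps)  # IRLS weights
        for i in range(m):   # weighted LS update of each row of U
            s = np.sqrt(Wt[i])
            U[i] = np.linalg.lstsq(s[:, None] * V.T, s * X[i], rcond=None)[0]
        for j in range(n):   # weighted LS update of each column of V
            s = np.sqrt(Wt[:, j])
            V[:, j] = np.linalg.lstsq(s[:, None] * U, s * X[:, j], rcond=None)[0]
    return U, V
```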

Nonnegative Block Term Decomposition with the beta-divergence

This git repository provides the MatLab implementation for computing the block term decomposition of an input tensor with the beta-divergence as the fitting function; see the following paper for more details:

C. Prevost and V. Leplat, "Nonnegative Block Term Decomposition with the beta-divergence for data fusion and blind spectral unmixing", accepted at ICASSP 2023, Rhodes Island, Greece. [doi] Long format available here: [hal]

NAG-GS: Semi-Implicit, Accelerated and Robust Stochastic Optimizers

This git repository provides the PyTorch implementation of NAG-GS, a new stochastic optimizer; see the preprint:

V. Leplat, D. Merkulov, A. Katrutsa, D. Bershatsky and I. Oseledets, "NAG-GS: Semi-Implicit, Accelerated and Robust Stochastic Optimizers", 2022. [arXiv]

Conic-based algorithms for nonnegative matrix factorization

This Matlab code provides a new framework for computing nonnegative matrix factorizations based on conic optimization; see the paper:

V. Leplat, Y. Nesterov, N. Gillis and F. Glineur, "Conic-Optimization based Algorithms for Nonnegative Matrix Factorization", accepted in Optimization Methods and Software, 2022. [doi]

**This work has recently been extended by our master's thesis student Matthieu Boucquey from UCLouvain, who improved the code to tackle higher-dimensional cases and added efficient meta-heuristics. The MatLab code can be found here, and the master's thesis, with more details on the methods and numerical tests, can be found here.**

Nonnegative Tucker Decomposition with beta-divergence

This webpage provides a demo of the algorithms developed to compute a nonnegative Tucker decomposition of an input tensor; the numerical tests focus on the application of music structure analysis, see the paper:

A. Marmoret, F. Voorwinden, V. Leplat,  J. Cohen and F. Bimbot, "Nonnegative Tucker Decomposition with Beta-divergence for Music Structure Analysis of audio signals", GRETSI 2022, Sep. 06-09, Nancy, France.  [arXiv]
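For readers who want a quick Python baseline, TensorLy's non_negative_tucker computes a Frobenius-norm (β = 2) nonnegative Tucker decomposition; the demo's algorithms target general β-divergences (notably KL, β = 1, which is natural for audio spectrograms), so the snippet below is only an adjacent starting point on a made-up tensor.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

# toy nonnegative "spectrogram-like" tensor: frequency x time x channel
T = np.abs(np.random.default_rng(0).standard_normal((64, 128, 2)))

core, factors = non_negative_tucker(tl.tensor(T), rank=[8, 16, 2], n_iter_max=200)
T_hat = tl.tucker_to_tensor((core, factors))
print(T_hat.shape)  # (64, 128, 2)
```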

Minimum-Volume IS-NMF

This Matlab code provides a new algorithm for solving a particular instance of IS-NMF, namely:

min_{W,H} D_IS(V|WH) + lambda * logdet(W^T W + delta * I)

s.t. W, H >= 0,  e^T W = rho * e^T
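For context on the logdet term: within an MM scheme it is customarily linearized using the concavity of log det on the positive definite cone, which yields a quadratic surrogate in W. The standard majorization below is the usual starting point (a background fact, not a transcription of this code's exact updates):

```latex
% Concavity of log det: for any S, S_0 \succ 0,
\log\det(S) \;\le\; \log\det(S_0) + \mathrm{trace}\!\left(S_0^{-1}(S - S_0)\right).
% With S = W^\top W + \delta I and Y = (W_0^\top W_0 + \delta I)^{-1} at the
% current iterate W_0, the penalty is majorized, up to constants, by
\lambda \, \mathrm{trace}\!\left(Y \, W^\top W\right).
```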

This algorithm is an update to the one initially proposed in
V. Leplat, N. Gillis and A.M.S. Ang, "Blind Audio Source Separation with Minimum-Volume Beta-Divergence NMF", IEEE Transactions on Signal Processing 68, pp. 3400-3410, 2020. [doi] [arXiv]

To do so, we used the new general optimization framework proposed in
V. Leplat, N. Gillis and J. Idier, "Multiplicative Updates for NMF with β-Divergences under Disjoint Equality Constraints", SIAM Journal on Matrix Analysis and Applications 42.2 (2021), pp. 730-752.  [doi] 

Details on the numerical tests are given in the readme file; in a nutshell, three new test scripts are provided.

(grouped) Consistent complex-NMF

This Matlab code allows you to solve the complex NMF problem with a consistency penalty: given an m-by-n complex matrix X and a factorization rank r, consistent complex-NMF is the problem of finding an m-by-r nonnegative matrix W, an r-by-n nonnegative matrix H, and an m-by-n complex matrix Theta such that D(X|F) + R(F) is minimized, where F = l(W,H,Theta), D(X|F) = ||X-F||_F^2, and R is a regularization function that promotes consistent solutions for the factorization. For more details, see:

A.M.S. Ang, V. Leplat and N. Gillis, "Fast algorithm for complex-NMF with application to source separation", 2021 29th European Signal Processing Conference (EUSIPCO), 2021, pp. 2084-2088. [doi]

Penalized β-NMF with disjoint linear constraints

This code provides the implementation of multiplicative updates for three NMF models minimizing Dβ(V,WH) over variables W≥0 and H≥0, namely (1) β-NMF with sum-to-one constraints on the columns of H, (2) minimum-volume NMF with sum-to-one constraints on the columns of W, and (3) sparse NMF with an ℓ1 penalty on the rows of H and ℓ2 norm constraints on the columns of W. These three algorithms were designed following the framework described in the following paper:

V. Leplat, N. Gillis and J. Idier, "Multiplicative Updates for NMF with β-Divergences under Disjoint Equality Constraints", SIAM Journal on Matrix Analysis and Applications 42.2 (2021), pp. 730-752.  [doi] 
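As one concrete instance, consider model (3): an ℓ1 penalty on the rows of H has a constant gradient, so it enters the classical multiplicative update only as an additive term in the denominator. A minimal, illustrative NumPy sketch (it omits the ℓ2 constraints on the columns of W that the paper additionally enforces):

```python
import numpy as np

def sparse_beta_nmf_update_H(V, W, H, beta=1.0, lam=0.1, eps=1e-12):
    # MM update for H in D_beta(V, WH) + lam * sum_k ||H(k, :)||_1, H >= 0:
    # compared with the plain beta-NMF update, the l1 term only adds lam
    # to the denominator.
    WH = W @ H + eps
    num = W.T @ (WH ** (beta - 2) * V)
    den = W.T @ (WH ** (beta - 1)) + lam + eps
    return H * num / den
```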

Multi-resolution β-NMF for blind spectral unmixing

This Matlab code tackles the following optimization problem: given two nonnegative matrices X of size Fx-by-Nx and Y of size Fy-by-Ny, with Fy > Fx and Nx > Ny, and a factorization rank K, MR-β-NMF is the problem of finding an Fy-by-K matrix W and a K-by-Nx nonnegative matrix H with ||W(:,j)||_1 = 1 for all j such that Dβ(X,RWH) + Dβ(Y,WHS) is minimized, where Dβ(X,RWH) (resp. Dβ(Y,WHS)) is the β-divergence between X (resp. Y) and RWH (resp. WHS), and R and S are respectively the frequency and temporal downsampling operators, which are potentially unknown. The code handles all values of β as well as the scenarios where R and S must be estimated from the input data matrices X and Y; see the paper

V. Leplat, N. Gillis and C. Févotte, "Multi-Resolution Beta-Divergence NMF for Blind Spectral Unmixing", Signal Processing, 2021. [arXiv] [doi]

Remark: this variant of the code is dedicated to HSI-fusion experiments.
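To make the roles of R and S concrete, the following sketch builds simple block-averaging downsampling operators and forms the two modelled terms RWH and WHS; all names and sizes are made up for illustration, and the actual code can in addition estimate R and S from X and Y.

```python
import numpy as np

def averaging_operator(n_fine, n_coarse):
    # simple block-averaging downsampling matrix of size n_coarse x n_fine
    R = np.zeros((n_coarse, n_fine))
    edges = np.linspace(0, n_fine, n_coarse + 1).astype(int)
    for i in range(n_coarse):
        R[i, edges[i]:edges[i + 1]] = 1.0 / (edges[i + 1] - edges[i])
    return R

# shapes: X is Fx-by-Nx (fine in time), Y is Fy-by-Ny (fine in frequency)
Fy, Fx, Nx, Ny, K = 128, 32, 200, 50, 5
rng = np.random.default_rng(0)
W, H = rng.random((Fy, K)), rng.random((K, Nx))
R = averaging_operator(Fy, Fx)       # frequency downsampling: R @ (W H) is Fx-by-Nx
S = averaging_operator(Nx, Ny).T     # temporal downsampling: (W H) @ S is Fy-by-Ny
X_model, Y_model = R @ (W @ H), (W @ H) @ S
print(X_model.shape, Y_model.shape)  # (32, 200) (128, 50)
```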


Minimum-Volume β-NMF

This Matlab code tackles the following optimization problem: Given an m-by-n nonnegative matrix V and a factorization rank r, min-vol β-NMF is the problem of finding an m-by-r nonnegative matrix W and an r-by-n nonnegative matrix H with ||W(:,j)||_1 = 1 for all j such that Dβ(V,WH) + λ logdet(W^T W + δ I) is minimized, where Dβ(V,WH) is the β-divergence between V and WH. The code handles β=1 (Kullback-Leibler divergence) and β=0 (Itakura-Saito divergence); see the paper

V. Leplat, N. Gillis and A.M.S. Ang, "Blind Audio Source Separation with Minimum-Volume Beta-Divergence NMF", IEEE Transactions on Signal Processing 68, pp. 3400-3410, 2020. [doi] [arXiv]


Distributionally Robust and Multi-Objective NMF

This Matlab code allows you to solve the NMF problem where the objective function is a weighted sum of several β-divergences measuring the error between the input matrix X and the factorization WH, where W≥0 and H≥0. It also allows you to solve the distributionally robust NMF problem where the goal is to minimize the maximum value among the β-divergences. 

See N. Gillis, L. T. K. Hien, V. Leplat and V. Y. F. Tan, "Distributionally Robust and Multi-Objective Nonnegative Matrix Factorization", January 2019. [arXiv] 


Minimum-Volume NMF

This Matlab code tackles the following optimization problem: Given an m-by-n nonnegative matrix X and a factorization rank r, min-vol NMF is the problem of finding an m-by-r nonnegative matrix W and an r-by-n nonnegative matrix H with ||H(:,j)||_1 ≤ 1 for all j such that ||X - WH||_F^2 + λ logdet(W^T W + δ I) is minimized.

See V. Leplat, A.M.S. Ang, N. Gillis, "Minimum-Volume Rank-Deficient Nonnegative Matrix Factorizations", ICASSP 2019, May 12-17, 2019, Brighton, UK. [doi] [pdf]