We present a nonnegative patch alignment framework (NPAF) to unify the NMF-related dimension reduction algorithms.
Traditional NMF solvers suffer from three problems: slow convergence, numerical instability, and nonconvergence; we apply Nesterov's optimal gradient method to overcome them. Because the multiplicative update rule (MUR) used by traditional NMF converges slowly, we propose a fast gradient descent (FGD) algorithm to speed it up. In addition, existing NMF learning methods require the entire dataset to reside in memory and thus cannot be applied to large-scale or streaming datasets; we address this with an efficient online RSA-NMF algorithm (ORNMF) that keeps both the time and space complexities of each step constant.
For practical classification tasks, NMF ignores both the local geometry of the data and the discriminative information of different classes. We introduce manifold regularization and margin maximization into NMF and obtain the manifold regularized discriminative NMF (MDNMF).
We present Manhattan NMF (MahNMF), which minimizes the Manhattan distance to model heavy-tailed Laplacian noise, together with extensions for various practical applications: box-constrained MahNMF, manifold regularized MahNMF, group sparse MahNMF, elastic net inducing MahNMF, and symmetric MahNMF.


MahNMF: Manhattan Nonnegative Matrix Factorization
Naiyang Guan, Dacheng Tao, Zhigang Luo, John Shawe-Taylor
CoRR abs/1207.3438 (2012)
Nonnegative matrix factorization (NMF) approximates a nonnegative matrix X by the product of two nonnegative low-rank factor matrices W and H. NMF and its extensions minimize either the Kullback-Leibler divergence or the Euclidean distance between X and W^{T}H to model Poisson or Gaussian noise, respectively. In practice, when the noise distribution is heavy-tailed, they cannot perform well. This paper presents Manhattan NMF (MahNMF), which minimizes the Manhattan distance between X and W^{T}H to model heavy-tailed Laplacian noise. Similar to sparse and low-rank matrix decompositions, e.g., robust principal component analysis (RPCA) and GoDec, MahNMF robustly estimates the low-rank part and the sparse part of a nonnegative matrix and thus performs effectively when data are contaminated by outliers. We extend MahNMF for various practical applications by developing box-constrained MahNMF, manifold regularized MahNMF, group sparse MahNMF, elastic net inducing MahNMF, and symmetric MahNMF. The major contribution of this paper lies in two fast optimization algorithms for MahNMF and its extensions: the rank-one residual iteration (RRI) method and Nesterov's smoothing method. In particular, by approximating the residual matrix by the outer product of one row of W and one row of H in MahNMF, we develop an RRI method that iteratively updates each variable of W and H in a closed-form solution. Although RRI is efficient for small-scale MahNMF and some of its extensions, it is neither scalable to large-scale matrices nor flexible enough to optimize all MahNMF extensions. Since the objective functions of MahNMF and its extensions are neither convex nor smooth, we apply Nesterov's smoothing method to recursively optimize one factor matrix with the other fixed.
By setting the smoothing parameter inversely proportional to the iteration number, we improve the approximation accuracy iteratively for both MahNMF and its extensions. We conduct experiments on both synthetic and real-world datasets, such as face images, natural scene images, surveillance videos, and multi-model datasets, to show the efficiency of the proposed Nesterov's smoothing method-based algorithm for solving MahNMF and its variants, and the effectiveness of MahNMF and its variants, by comparing them with traditional NMF, RPCA, and GoDec.
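To make the smoothing idea concrete, here is a minimal NumPy sketch. It uses a Huber-type smooth approximation of the absolute value and a plain projected-gradient update of H with W fixed, with the smoothing parameter set to 1/k at iteration k as described above. The function names and the simple update loop are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def smoothed_abs(r, mu):
    """Huber-type smooth approximation of |r| with smoothing parameter mu."""
    return np.where(np.abs(r) <= mu, r**2 / (2 * mu), np.abs(r) - mu / 2)

def smoothed_abs_grad(r, mu):
    """Gradient of the smoothed absolute value: clip(r / mu, -1, 1)."""
    return np.clip(r / mu, -1.0, 1.0)

def mahnmf_update_H(X, W, H, n_iter=50):
    """Illustrative projected-gradient update of H for min_{H>=0} ||X - W.T @ H||_1,
    with the smoothing parameter mu = 1/k at iteration k (a sketch, not the
    paper's RRI or full Nesterov scheme)."""
    for k in range(1, n_iter + 1):
        mu = 1.0 / k                          # smoothing tightens with iterations
        R = X - W.T @ H                       # residual matrix
        G = -W @ smoothed_abs_grad(R, mu)     # gradient of smoothed objective w.r.t. H
        L = np.linalg.norm(W @ W.T, 2) / mu   # Lipschitz constant of that gradient
        H = np.maximum(H - G / L, 0.0)        # projected gradient step
    return H
```

Each step decreases the mu-smoothed objective, and since the smoothed value differs from the true Manhattan distance by at most mu/2 per entry, shrinking mu tightens the approximation.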




Online Nonnegative Matrix Factorization With Robust Stochastic Approximation
Naiyang Guan, Dacheng Tao, Zhigang Luo, Bo Yuan
IEEE Trans. Neural Netw. Learning Syst. 23(7): 1087-1099 (2012)
Nonnegative matrix factorization (NMF) has become a popular dimension-reduction method and has been widely applied to image processing and pattern recognition problems. However, conventional NMF learning methods require the entire dataset to reside in memory and thus cannot be applied to large-scale or streaming datasets. In this paper, we propose an efficient online RSA-NMF algorithm (ORNMF) that learns NMF in an incremental fashion and thus solves this problem. In particular, ORNMF receives one sample or a chunk of samples per step and updates the bases via robust stochastic approximation. Benefiting from a smartly chosen learning rate and an averaging technique, ORNMF converges at the rate of O(1/√k) in each update of the bases. Furthermore, we prove that ORNMF almost surely converges to a local optimal solution by using the quasi-martingale. By using a buffering strategy, we keep both the time and space complexities of one step of ORNMF constant, making ORNMF suitable for large-scale or streaming datasets. Preliminary experimental results on real-world datasets show that ORNMF outperforms existing online NMF (ONMF) algorithms in terms of efficiency. Experimental results of face recognition and image annotation on public datasets confirm the effectiveness of ORNMF compared with existing ONMF algorithms.
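The incremental scheme above can be sketched as follows: one sample arrives per step, the bases are updated by a projected stochastic-gradient step with a diminishing O(1/√k) learning rate, and an averaged iterate is maintained. This is a simplified illustration of the general idea under those assumptions, not the exact ORNMF algorithm, and the function name is ours.

```python
import numpy as np

def online_nmf_stream(samples, r, dim, seed=0):
    """Illustrative online NMF: per-sample projected SGD on the bases W with
    an O(1/sqrt(k)) learning rate and iterate averaging (a sketch only)."""
    rng = np.random.default_rng(seed)
    W = np.abs(rng.standard_normal((dim, r)))      # nonnegative bases, dim x r
    W_avg = W.copy()                               # running average of iterates
    for k, x in enumerate(samples, start=1):
        # nonnegative coefficients for the new sample (one crude NNLS-like step)
        h = np.maximum(np.linalg.lstsq(W, x, rcond=None)[0], 0.0)
        grad = (W @ h - x)[:, None] * h[None, :]   # gradient of 0.5 * ||x - W h||^2
        eta = 1.0 / np.sqrt(k)                     # diminishing learning rate
        W = np.maximum(W - eta * grad, 0.0)        # projected stochastic step
        W_avg = ((k - 1) * W_avg + W) / k          # averaging technique
    return W_avg
```

Because only the current sample and the fixed-size bases are held at each step, the per-step time and space costs stay constant regardless of how long the stream runs.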





NeNMF: An Optimal Gradient Method for Nonnegative Matrix Factorization
Naiyang Guan, Dacheng Tao, Zhigang Luo, Bo Yuan
IEEE Transactions on Signal Processing 60(6): 2882-2898 (2012)
Nonnegative matrix factorization (NMF) is a powerful matrix decomposition technique that approximates a nonnegative matrix by the product of two low-rank nonnegative matrix factors. It has been widely applied to signal processing, computer vision, and data mining. Traditional NMF solvers include the multiplicative update rule (MUR), the projected gradient method (PG), projected nonnegative least squares (PNLS), and the active set method (AS). However, they suffer from one or more of the following three problems: slow convergence, numerical instability, and nonconvergence. In this paper, we present a new efficient solver, NeNMF, that simultaneously overcomes these problems. It applies Nesterov's optimal gradient method to alternately optimize one factor with the other fixed. In particular, at each iteration round, the matrix factor is updated by a projected gradient step performed at a smartly chosen search point, where the step size is determined by the Lipschitz constant. Since NeNMF does not use time-consuming line search and converges optimally at rate O(1/k^{2}) in optimizing each matrix factor, it is superior to MUR and PG in terms of efficiency as well as approximation accuracy. Compared with PNLS and AS, which suffer from numerical instability in the worst case, NeNMF overcomes this deficiency. In addition, NeNMF can be used to solve L_{1}-norm, L_{2}-norm, and manifold regularized NMF with the optimal convergence rate. Numerical experiments on both synthetic and real-world datasets show the efficiency of NeNMF for NMF and its variants compared with representative NMF solvers. Extensive experiments on document clustering suggest the effectiveness of NeNMF.
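The inner loop described above, Nesterov-accelerated projected gradient on one nonnegative least-squares subproblem with the step size fixed by the Lipschitz constant, can be sketched in NumPy as follows. This is a minimal illustration assuming the factorization X ≈ WH; the function name is ours and details of the paper's stopping rules are omitted.

```python
import numpy as np

def accelerated_update_H(X, W, H0, n_iter=100):
    """Illustrative Nesterov-accelerated projected gradient for the subproblem
    min_{H>=0} 0.5 * ||X - W @ H||_F^2 with W fixed. The step size is 1/L with
    L = ||W.T @ W||_2, so no line search is needed (a sketch of the idea)."""
    WtW, WtX = W.T @ W, W.T @ X
    L = np.linalg.norm(WtW, 2)                       # Lipschitz constant of the gradient
    H, Y, t = H0.copy(), H0.copy(), 1.0              # Y is the extrapolated search point
    for _ in range(n_iter):
        grad = WtW @ Y - WtX                         # gradient at the search point
        H_new = np.maximum(Y - grad / L, 0.0)        # projected gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2     # momentum schedule
        Y = H_new + ((t - 1) / t_new) * (H_new - H)  # smartly chosen next search point
        H, t = H_new, t_new
    return H
```

Alternating this update between the two factors (swapping the roles of W and H) gives the overall alternating scheme; each subproblem enjoys the O(1/k^2) rate of Nesterov's method.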





Manifold Regularized Discriminative Nonnegative Matrix Factorization With Fast Gradient Descent
Naiyang Guan, Dacheng Tao, Zhigang Luo, Bo Yuan
IEEE Transactions on Image Processing 20(7): 2030-2048 (2011)
Nonnegative matrix factorization (NMF) has become a popular data-representation method and has been widely used in image processing and pattern-recognition problems. This is because the learned bases can be interpreted as a natural parts-based representation of data, an interpretation consistent with the psychological intuition of combining parts to form a whole. For practical classification tasks, however, NMF ignores both the local geometry of data and the discriminative information of different classes. In addition, existing research shows that the learned basis is not necessarily parts-based, because there is neither an explicit nor an implicit constraint to ensure that the representation is parts-based. In this paper, we introduce manifold regularization and margin maximization into NMF and obtain the manifold regularized discriminative NMF (MDNMF) to overcome these problems. The multiplicative update rule (MUR) can be applied to optimize MDNMF, but it converges slowly. We therefore propose a fast gradient descent (FGD) method to optimize MDNMF. FGD contains a Newton method that searches for the optimal step length, and thus FGD converges much faster than MUR. In addition, FGD includes MUR as a special case and can be applied to optimizing NMF and its variants. For a problem with 165 samples in R^{1600}, FGD converges in 28 s, while MUR requires 282 s. We also apply FGD to a variant of MDNMF, and experimental results confirm its efficiency. Experimental results on several face image datasets suggest the effectiveness of MDNMF.
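For reference, the MUR baseline that FGD generalizes (and outpaces) can be sketched in a few lines of NumPy. This is the standard Lee-Seung multiplicative update for the plain Euclidean NMF objective, shown as a minimal illustration; it is not the MDNMF objective with its manifold and margin terms, and the function name is ours.

```python
import numpy as np

def nmf_mur(X, r, n_iter=200, seed=0, eps=1e-10):
    """Illustrative multiplicative update rule (MUR) for min ||X - W @ H||_F^2
    with W, H >= 0: the slow baseline that FGD speeds up by searching a better
    step length (a sketch of the standard Lee-Seung updates)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = np.abs(rng.standard_normal((m, r)))       # random nonnegative init
    H = np.abs(rng.standard_normal((r, n)))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)      # multiplicative update for H
        W *= (X @ H.T) / (W @ H @ H.T + eps)      # multiplicative update for W
    return W, H
```

Each multiplicative update corresponds to a gradient step with a particular implicit step size; FGD's contribution is to search for a better step length (via a Newton method) instead of accepting that fixed choice, which is why it converges in far fewer seconds on the 165-sample problem quoted above.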





Non-Negative Patch Alignment Framework
Naiyang Guan, Dacheng Tao, Zhigang Luo, Bo Yuan
IEEE Transactions on Neural Networks 22(8): 1218-1230 (2011)
In this paper, we present a non-negative patch alignment framework (NPAF) to unify popular nonnegative matrix factorization (NMF)-related dimension reduction algorithms. It offers a new viewpoint for better understanding the common properties of different NMF algorithms. Although the multiplicative update rule (MUR) can solve NPAF and is easy to implement, it converges slowly. Thus, we propose a fast gradient descent (FGD) method to overcome this problem. FGD uses the Newton method to search for the optimal step size, and thus converges faster than MUR. Experiments on synthetic and real-world datasets confirm the efficiency of FGD compared with MUR for optimizing NPAF. Based on NPAF, we develop nonnegative discriminative locality alignment (NDLA). Experiments on face image and handwritten datasets suggest the effectiveness of NDLA in classification tasks and its robustness to image occlusions, compared with representative NMF-related dimension reduction algorithms.


