BIGDATA: Mid-Scale: DA: Collaborative Research: Big Tensor Mining: Theory, Scalable Algorithms and Applications

 
Nikos Sidiropoulos, PI; George Karypis, co-PI
Dept. of ECE, CSci, and DTC
University of Minnesota
Minneapolis, MN 55455
Phone: (612) 625-1242
Fax: (612) 625-2002
Email: nikos@umn.edu, karypis@cs.umn.edu
WWW page: www.ece.umn.edu/~nikos

This material is based upon work supported by the National Science Foundation under Grants No. IIS-1247632 and IIS-1247489. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

1. GENERAL INFORMATION

1.1. Abstract

Given triplets of facts (subject, verb, object), like ("Washington", "is the capital of", "USA"), can we find patterns, new objects, new verbs, and anomalies? Can we correlate them with brain scans, to discover which parts of the brain are activated, say, by tool-like nouns ("hammer") or action-like verbs ("run")? Similarly, given a who-recommends-what recommender system with rich side information (e.g., user-product review texts), can we better explain recommendations and detect clusters and anomalies?

The data in these applications come in the form of tensors, or coupled tensors and matrices. Tensors are multi-way generalizations of matrices: whereas matrices are (two-way) datasets indexed by two indices (row and column), tensors are indexed by three or more indices. Extremely large and sparse tensors and coupled tensor/matrix data arise in numerous important applications that require algorithms and associated software capable of identifying the core relations among the different tensor modes and scaling to extremely large datasets. The objective of this project is to develop theory and algorithms for (coupled) sparse and low-rank tensor factorization, along with associated scalable software toolkits to make such analysis possible.
[Figure: Illustration of multi-way compressed sensing for tensors (Sidiropoulos & Kyrillidis, IEEE SPL, 2012)]
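
To make the data model concrete, the following minimal sketch (illustrative only, not project code; all index values are hypothetical) shows how such a (subject, verb, object) tensor is typically stored in sparse coordinate (COO) form:

    import numpy as np

    # Each nonzero is a (subject, verb, object) triplet with an observed count.
    # The integer IDs below are hypothetical vocabulary indices.
    subs  = np.array([0, 0, 1])        # e.g., 0 = "Washington"
    verbs = np.array([2, 3, 4])        # e.g., 2 = "is the capital of"
    objs  = np.array([5, 6, 7])        # e.g., 5 = "USA"
    vals  = np.array([1.0, 1.0, 1.0])  # observed counts

    # Only the nonzeros are stored; the dense shape would be astronomically large.
    shape = (10_000, 5_000, 10_000)
    print(f"{len(vals)} nonzeros out of {np.prod(shape):,} cells")

Because only the nonzeros ever touch memory, this representation is what makes factorization of web-scale triplet data feasible in the first place.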

The research in the project is centered on three major thrusts. The first makes novel theoretical contributions in the area of coupled tensor factorization, developing multi-way compressed sensing methods for dimensionality reduction with perfect latent model reconstruction, along with methods to handle missing values, noisy input, and coupled data. The second thrust focuses on algorithms and scalability on modern architectures, enabling the efficient analysis of coupled tensors with millions or even billions of non-zero entries, using the map-reduce paradigm as well as hybrid multicore architectures. An open-source coupled tensor factorization toolbox (HTF: Hybrid Tensor Factorization) will be developed, providing robust and high-performance implementations of these algorithms. Finally, the third thrust focuses on evaluating and validating the effectiveness of these coupled factorization algorithms on a NeuroSemantics application, whose goal is to understand how human brain activity correlates with text reading and understanding, by analyzing fMRI and MEG brain image datasets obtained while subjects read various text passages.
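
For concreteness, the coupled matrix-tensor factorization problem underlying the first thrust and the HTF toolbox is commonly posed as follows (this is the standard formulation from the CMTF literature, in our notation; the project's variants add compression, constraints, and missing-value handling):

    \min_{\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D}}
      \Big\| \underline{\mathbf{X}} - \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r \Big\|_F^2
      + \big\| \mathbf{Y} - \mathbf{A}\mathbf{D}^{\top} \big\|_F^2

where \underline{\mathbf{X}} is an I x J x K data tensor (e.g., brain activity indexed by noun, voxel, and subject), \mathbf{Y} is an I x M side-information matrix sharing its first mode with \underline{\mathbf{X}}, \circ denotes the vector outer product, and \mathbf{a}_r, \mathbf{b}_r, \mathbf{c}_r and the rows of \mathbf{D} are the latent factors to be recovered.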

1.2. Keywords

Data mining, map/reduce, read-the-web, tensors, dimensionality reduction, latent factor models, compressed sensing, recommender systems.

1.3. Funding agency

  • NSF, Award Number: IIS-1247632, Duration: 12/01/2012 - 11/30/2016 (Expected)

2. PEOPLE INVOLVED

In addition to the UMN team, the following people from CMU work on the project under IIS-1247489:

In addition to the main contributors, the following people are collaborators:

3. RESEARCH

3.1. Project goals

We propose a unified coupled tensor factorization framework to systematically mine such multi-way datasets. Unique challenges in these settings include (a) tera- and petabyte scaling issues, (b) distributed fault-tolerant computation, (c) large proportions of missing data, and (d) insufficient theory and methods for big sparse tensors.

We propose to address these challenges via three research thrusts: (i) motivating applications: real-world tera-scale applications, including "read-the-web", NeuroSemantics, and large recommender systems; (ii) new theory and methods for big sparse tensor and coupled tensor/matrix factorization; and (iii) scalability to tera- and petabyte-scale data, using the map-reduce paradigm and extending it to multi-core settings.
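
As background for thrust (ii): the workhorse PARAFAC/CP model approximates a tensor by a sum of R rank-one terms, classically fit by alternating least squares (ALS). The sketch below is the textbook dense baseline (ours, for illustration only, not the project's algorithms); it is this computation that the project's methods compress, parallelize, and constrain to reach tera-scale:

    import numpy as np

    def khatri_rao(U, V):
        """Column-wise Kronecker (Khatri-Rao) product: (J x R), (K x R) -> (J*K x R)."""
        J, R = U.shape
        K, _ = V.shape
        return (U[:, None, :] * V[None, :, :]).reshape(J * K, R)

    def cp_als(X, R, n_iters=50, seed=0):
        """Fit a rank-R CP model X ~ sum_r a_r o b_r o c_r by alternating least squares."""
        rng = np.random.default_rng(seed)
        I, J, K = X.shape
        A = rng.standard_normal((I, R))
        B = rng.standard_normal((J, R))
        C = rng.standard_normal((K, R))
        # Mode-n unfoldings (matricizations) of X, consistent with khatri_rao above.
        X1 = X.reshape(I, J * K)
        X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)
        X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)
        for _ in range(n_iters):
            # Each step is a linear least-squares solve with the other factors fixed.
            A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

At scale, the bottleneck is the unfolded-tensor-times-Khatri-Rao product in each update, which is why the memory-efficient kernels listed under Current Results matter.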

3.2. Current Results

Our main results so far are as follows:
  • We have developed scalable, parallelizable algorithms that can accelerate any existing Coupled Matrix-Tensor Factorization (CMTF) algorithm by up to 200x, while producing up to 65x sparser results.
  • Using CMTF and the aforementioned tool, we have been able to simultaneously cluster brain regions, nouns, and semantic features pertaining to those nouns into insightful and meaningful groups.
  • We have developed provably correct, scalable methods for PARAFAC decomposition. We developed two new classes of parallel algorithms for big tensor decomposition: PARACOMP for Hadoop-type in-cloud computation; and ADMM-based constrained tensor decomposition in high-performance computing (mesh) architectures. 
  • We devised memory- and computation-efficient parallel algorithms for computing the matricized-tensor-times-Khatri-Rao product (MTTKRP) and related matrix products that arise in tensor factorization and dimensionality reduction (see the sketch after this list).
  • We worked on a new statistical approach for data imputation for big tensors that exploits laws of large numbers together with identifiability properties of tensor factorization. 
  • We also worked on non-negative matrix factorization (NMF), producing new uniqueness results and a very fast and efficient decomposition algorithm for symmetric NMF. In parallel, we derived Cramer-Rao lower bounds on parameter estimation accuracy for NMF, revealing in particular the prominent role of sparsity and enabling a road test of existing NMF algorithms. These results will serve as a springboard for the development of truly scalable NMF and non-negative tensor decomposition algorithms for big data.
  • We developed linear dynamical system models of brain activity from MEG signals recorded in controlled experiments, where the subjects were shown nouns and were requested to answer a yes/no question.
  • We developed a runtime system for MPI programs that enables efficient and transparent disk-based execution of distributed-memory parallel programs. It orchestrates the execution of a large number of MPI processes on far fewer compute nodes, so that the running processes maximize the amount of computation they perform on the data fetched from disk. See Big Data Message Passing Interface (BDMPI) at http://glaros.dtc.umn.edu/gkhome/bdmpi/overview
  • We have also developed new methods for clustering very large power-law graphs, capable of quickly computing high-quality k-way and any-way clusterings; for example, a graph with over 18 million vertices and 261 million edges is clustered in 34 seconds.
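
The memory-efficient products mentioned above center on the MTTKRP kernel that dominates ALS-type factorization of sparse tensors. The sketch below (ours, for illustration; not the project's parallel implementation) shows the key idea: accumulate each nonzero's rank-R contribution directly, so the (J*K x R) Khatri-Rao matrix is never formed:

    import numpy as np

    def sparse_mttkrp(subs, vals, B, C, I):
        """Mode-1 MTTKRP for a sparse COO tensor.

        subs: (nnz, 3) integer array of (i, j, k) indices
        vals: (nnz,) nonzero values
        Returns M of shape (I, R) with M[i, :] = sum_{(i,j,k)} x_ijk * B[j, :] * C[k, :].
        """
        M = np.zeros((I, B.shape[1]))
        # One vectorized pass over the nonzeros...
        contrib = vals[:, None] * B[subs[:, 1]] * C[subs[:, 2]]
        # ...then a scatter-add of each contribution into its row of M.
        np.add.at(M, subs[:, 0], contrib)
        return M

This costs O(nnz * R) operations and O(I * R) extra memory, versus the O(J * K * R) blow-up of materializing the Khatri-Rao product; parallel implementations typically partition the nonzeros across threads or nodes.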

3.3. Publications

  • Journal
    1. A.P. Liavas and N.D. Sidiropoulos, “Parallel Algorithms for Constrained Tensor Factorization via the Alternating Direction Method of Multipliers”, IEEE Trans. on Signal Processing, submitted Aug. 30, 2014. http://arxiv.org/abs/1409.2383

    2. O. Mehanna, K. Huang, B. Gopalakrishnan, A. Konar, and N.D. Sidiropoulos, “Feasible Point Pursuit and Successive Approximation of Non-convex QCQPs”, IEEE Signal Processing Letters, submitted June 11, 2014.

    3. E.E. Papalexakis, T. Mitchell, N.D. Sidiropoulos, C. Faloutsos, P.P. Talukdar, and B. Murphy, “Turbo-SMT: Accelerating Coupled Sparse Matrix-Tensor Factorizations by 200x”, Statistical Analysis and Data Mining (SAM) journal, ‘Best of SDM 2014’ special issue, submitted May 15, 2014.

    4. E.E. Papalexakis, C. Faloutsos, and N.D. Sidiropoulos, “ParCube: Sparse Parallelizable Tensor Decompositions”, ACM Transactions on Knowledge Discovery from Data, submitted May 11, 2014.

    5. N.D. Sidiropoulos, E.E. Papalexakis, and C. Faloutsos, “PArallel RAndomly COMPressed Cubes: A Scalable Distributed Architecture for Big Tensor Decomposition”, IEEE Signal Processing Magazine, Special Issue on Signal Processing for Big Data, pp. 57-70, Sep. 2014. PDF

    6. K. Huang and N.D. Sidiropoulos, “Putting NMF to the Test: A Tutorial Derivation of Pertinent Cramer-Rao Bounds and Performance Benchmarking”, IEEE Signal Processing Magazine, Special Issue on Source Separation and Applications, pp. 76-86, May 2014. PDF

    7. E.E. Papalexakis, U Kang, C. Faloutsos, N.D. Sidiropoulos, and A. Harpale, “Large Scale Tensor Decompositions: Algorithmic Developments and Applications”, IEEE Data Engineering Bulletin, Special Issue on Social Media and Data Analysis, vol. 36(3):59–66, Sep. 2013. PDF

    8. K. Huang, N.D. Sidiropoulos, and A. Swami, “Non-negative Matrix Factorization Revisited: Uniqueness and Algorithm for Symmetric Decomposition”, IEEE Trans. on Signal Processing, 62(1):211–224, Jan. 2014. PDF  

    9. D. LaSalle and G. Karypis, “Multi-Threaded Modularity Based Graph Clustering using the Multilevel Paradigm”, Journal of Parallel and Distributed Computing, 2014 (in press).

    10. D. LaSalle and G. Karypis, “MPI for Big Data: New Tricks for an Old Dog”, Parallel Computing, 2014 (in press).

  • Conference
    1. D. LaSalle and G. Karypis, “Multi-Threaded Graph Clustering”, in Proc. 29th IEEE International Conference on Data Engineering (ICDE), Brisbane, Australia, April 8-11, 2013.

    2. D. LaSalle and G. Karypis, “BDMPI: Conquering Big Data with Small Clusters using MPI”, in Proc. Workshop on Data Intensive Scalable Computing Systems, Supercomputing 2013, Denver, CO, Nov. 18, 2013.

    3. N.D. Sidiropoulos, E.E. Papalexakis, and C. Faloutsos, “A Parallel Algorithm for Big Tensor Decomposition Using Randomly Compressed Cubes (PARACOMP)”, Proc. IEEE ICASSP 2014, May 4-9, 2014 - Florence, Italy.

    4. E.E. Papalexakis, T. Mitchell, N.D. Sidiropoulos, C. Faloutsos, P.P. Talukdar, and B. Murphy, “Turbo-SMT: Accelerating Coupled Sparse Matrix-Tensor Factorizations by 200x”, in Proc. SIAM Conference on Data Mining (SDM), Philadelphia, PA, April 24-26, 2014. Invited to fast-track journal publication in the Statistical Analysis and Data Mining (SAM) journal, ‘Best of SDM 2014’ special issue.

    5. J.H. Marcos and N.D. Sidiropoulos, “Just Compress and Relax: Handling Missing Values in Big Tensor Analysis”, in Proc. 6th International Symposium on Communications, Control, and Signal Processing (ISCCSP), Athens, Greece, May 21-23, 2014.

    6. E.E. Papalexakis, A. Fyshe, N.D. Sidiropoulos, P.P. Talukdar, T. Mitchell, and C. Faloutsos, “Good-Enough Brain Model: Challenges, Algorithms and Discoveries in Multi-Subject Experiments”, in Proc. ACM KDD 2014, New York City, Aug. 24-27, 2014.

    7. N. Ravindran, N.D. Sidiropoulos, S. Smith, and G. Karypis, “Memory-Efficient Parallel Computation of Tensor and Matrix Products for Big Tensor Decomposition”, in Proc. Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 2-5, 2014.

    8. A.P. Liavas and N.D. Sidiropoulos, “Parallel Algorithms for Large Scale Constrained Tensor Decomposition”, submitted to IEEE ICASSP 2015, April 19-24, 2015, Brisbane, Australia.

  • Book chapters
    1. D. Anastasiu, J. Iverson, S. Smith, and G. Karypis, “Big Data Frequent Pattern Mining”, in Frequent Pattern Mining (Editors: Charu Aggarwal and Jiawei Han), Springer, 2015 (to appear).

3.4. Computational tools

  1. Big Data Message Passing Interface (BDMPI): http://glaros.dtc.umn.edu/gkhome/bdmpi/overview

3.5. Tutorials

  1. N.D. Sidiropoulos and E.E. Papalexakis, “Factoring Tensors in the Cloud: A Tutorial on Big Tensor Data Analytics”, IEEE ICASSP 2014, Florence, Italy, May 5, 2014. http://www.cs.cmu.edu/~epapalex/tutorials/icassp14.html

3.6. Plenaries

  1. N.D. Sidiropoulos, “Tensor Decomposition Theory and Algorithms in the Era of Big Data”, Inaugural plenary lecture at EUSIPCO 2014, Lisbon, Portugal, Sep. 2, 2014. PDF

Last updated: Oct. 3, 2014, by Nikos Sidiropoulos