Session Chair: Evrim Acar, Simula
08:30–09:20
Borbala Hunyadi, Delft University of Technology
Abstract: Our brain is a complex network of interconnected regions that function in a coordinated manner to execute various tasks and process information. Mapping the organization of this network in space and time is a major challenge in computational neuroscience and related fields. Consequently, there is a large body of literature describing the brain as a network, including many solutions based on tensor decompositions. However, there are major differences in terms of what is meant by “network” in these works. Therefore, in this talk, I first present different abstractions of brain networks. Subsequently, I explain how these different abstractions lead, via matrix factorizations for computing simpler versions of the models, to different tensor decompositions. More specifically, I will discuss blind source separation of functional neuroimaging data via canonical polyadic decomposition (CPD) and block-term decomposition (BTD), as well as community detection in dynamic networks, again using CPD and BTD. Along the way I explain the underlying model assumptions, requirements on input data organization and preprocessing, and finally, model (i.e. rank) selection considerations. While the choice between CPD and BTD follows from conscious modelling decisions, determining the rank is often an involved process combining theory, background knowledge and experimentation. I will briefly present the extension of a well-known metric (the core consistency diagnostic of CPD) to BTD, an algebraic tool to guide rank selection. I will illustrate the discussed brain network modelling solutions with real data experiments on EEG and functional ultrasound data.
Coffee Break 09:30–09:50
09:50–10:40
Irina Gaynanova, University of Michigan
Abstract: Ambulatory blood pressure monitoring (ABPM) is widely used to track blood pressure and heart rate over periods of 24 hours or more. Most existing studies rely on basic summary statistics of ABPM data, such as means or medians, which obscure temporal features like nocturnal dipping and individual chronotypes. To better characterize the temporal features of ABPM data, we propose a novel smooth tensor decomposition method. Built upon traditional low-rank tensor factorization techniques, our method incorporates a smoothing penalty to handle noise and employs an iterative algorithm to impute missing data. We also develop an automatic approach for the selection of optimal smoothing parameters and ranks. We apply our method to ABPM data from patients with concurrent obstructive sleep apnea and type II diabetes. Our method explains temporal components of data variation and outperforms the traditional approach of using summary statistics in capturing the associations between covariates and ABPM measurements. Notably, it distinguishes covariates that influence the overall levels of blood pressure and heart rate from those that affect the contrast between the two.
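The smoothing penalty at the heart of such a method can be illustrated in isolation: penalizing second differences shrinks a noisy curve toward a smooth one. The standalone Whittaker-style smoother below is only a sketch of the penalty idea, not the authors' factorization method, and the parameter values are illustrative:

```python
import numpy as np

def smooth_signal(y, lam):
    # minimize ||f - y||^2 + lam * ||D2 f||^2, where D2 takes second differences
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)  # (n-2) x n second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)
```

In the full method, a penalty of this kind acts on the temporal factor of the tensor decomposition, with the smoothing parameter chosen automatically rather than by hand.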
10:50–11:40
Christos Chatzis, Simula & Oslo Metropolitan University
Abstract: Multi-way datasets (also referred to as higher-order tensors) are commonly analyzed using unsupervised matrix and tensor factorizations to reveal the underlying patterns. When one of the “ways” that the data evolves across is time, the objective of such analyses often becomes the identification and tracking of the underlying evolving patterns. Standard tensor decomposition methods like CP and PARAFAC2 treat time just as another mode, ignoring its inherently sequential nature. Other methods that include appropriate temporal regularization either have no uniqueness guarantees or have too many structural requirements. In this talk, I will discuss why existing methods might be unsuitable for this task and introduce two time-aware tensor factorization methods:
t(emporal)PARAFAC2, which promotes smooth changes across the evolving factors of PARAFAC2, and
d(ynamical)CMF, which enforces a linear dynamical system (LDS) structure on component trajectories.
In extensive synthetic experiments, we compare these methods with the state of the art on the task of uncovering evolving patterns in terms of accuracy, while also highlighting their benefits and limitations with respect to three essential requirements for analyzing temporal data: (a) time-awareness, (b) structural flexibility, and (c) uniqueness.
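The two temporal structures named above can be sketched in a few lines of NumPy. Here the rotation dynamics, sizes, and penalty form are illustrative choices, not the talk's implementations:

```python
import numpy as np

# dCMF-style constraint: a component trajectory follows a linear dynamical
# system f_{t+1} = A f_t (here A is a slow 2-D rotation, chosen for illustration)
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
f = [np.array([1.0, 0.0])]
for _ in range(20):
    f.append(A @ f[-1])
F = np.stack(f)  # one 2-D component traced over 21 time steps

# tPARAFAC2-style regularizer: penalize differences between consecutive factors
penalty = sum(np.linalg.norm(F[t] - F[t-1])**2 for t in range(1, len(F)))
```

A smooth trajectory like this one incurs a small penalty, which is exactly what the smoothness regularization rewards; the LDS constraint instead bakes the dynamics into the model structure.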
Lunch 12:00–13:00
Session Chair: Martin Haardt, Ilmenau University of Technology
13:15–14:05
Lieven De Lathauwer, KU Leuven
Abstract: As is well-known in TRICAP, tensor decompositions have become essential tools for data analysis, extending beyond traditional matrix decompositions. However, decompositions represent only one aspect of applied (linear and multilinear) algebra. The other fundamental aspect is solving sets of (overdetermined) linear equations (in the least squares sense). Multilinear equations, particularly polynomial equations, have received comparatively little attention.
In this talk, we will conceptually explore the significance of polynomial equations in data analysis, signal processing, modeling, and related fields. We will demonstrate a natural connection between polynomial equations and tensor decompositions. Additionally, we will highlight intriguing connections with blind source separation and harmonic retrieval. Finally, we will address some computational aspects and discuss strategies to mitigate the curse of dimensionality.
This is joint work with Nithin Govindarajan and Raphaël Widdershoven.
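A classic instance of the link between polynomial equations and linear algebra is univariate root finding via the companion matrix, whose eigenvalues are the polynomial's roots. This toy sketch only illustrates the general theme, not the methods of the talk:

```python
import numpy as np

def companion(coeffs):
    # companion matrix of a polynomial given highest-degree coefficient first
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                # normalize to a monic polynomial
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)  # subdiagonal of ones
    C[:, -1] = -c[1:][::-1]     # last column holds the negated coefficients
    return C

# roots of x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3) via an eigenvalue problem
roots = np.linalg.eigvals(companion([1, -6, 11, -6]))
```

Multivariate polynomial systems admit analogous reductions to structured eigenvalue problems, which is where the connections to tensor decompositions discussed in the talk appear.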
14:15–15:05
Konstantin Usevich, CRAN
Abstract: This talk is motivated by connections between neural network models and low-rank tensor decompositions. In the first part of the talk, we briefly discuss connections of polynomial neural networks (PNNs) to tensor decompositions and show how to prove identifiability of PNNs using some well-known results on uniqueness of the CPD. The second part of the talk is on ParaTuck-2 decomposition, which is also linked to neural networks with two hidden layers. We report recent results on algorithms for ParaTuck-2 and its symmetric variant, DEDICOM. We show that under the best known uniqueness conditions (Harshman, Lundy, 1996), the ParaTuck-2 decomposition can be reduced to eigenvalue and nullspace computations.
Coffee Break 15:15–15:35
15:35–16:25
Sebastian Miron, CRAN
Abstract: Multidimensional quaternion arrays (often referred to as “quaternion tensors”) and their decompositions have recently gained increasing attention in various fields such as color and polarimetric imaging or video processing. Despite this growing interest, the theoretical development of quaternion tensors remains limited. This talk introduces a novel multilinear framework for quaternion arrays, which extends classical tensor analysis to multidimensional quaternion data in a rigorous manner. Specifically, we propose a new definition of quaternion tensors as HR-multilinear forms, addressing the challenges posed by the non-commutativity of quaternion multiplication. Within this framework, we establish the Tucker decomposition for quaternion tensors and develop a quaternion Canonical Polyadic Decomposition (Q-CPD). We thoroughly investigate the properties of the Q-CPD, including trivial ambiguities, complex equivalent models, and sufficient conditions for uniqueness. Additionally, we present two algorithms for computing the Q-CPD and demonstrate their effectiveness through numerical experiments. Our results provide a solid theoretical foundation for further research on quaternion tensor decompositions and offer new computational tools for practitioners working with quaternion multiway data.
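The non-commutativity that complicates the quaternion framework is visible already at the scalar level. A minimal Hamilton-product sketch follows; representing quaternions as (w, x, y, z) vectors is an illustrative choice:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) vectors
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
# i*j = k but j*i = -k: the order of factors matters
```

Because products cannot be freely reordered, familiar multilinear identities (and hence standard CPD machinery) must be re-derived with care, which motivates the HR-multilinear definition proposed in the talk.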
Dinner 18:30–20:30