Code

MERACLE: Constructive layer-wise conversion of a Tensor Train into a MERA

The MERACLE algorithm converts a tensor train into either a Tucker decomposition or a MERA (multi-scale entanglement renormalization ansatz). The tensor train is 'unfolded' layer by layer into a MERA, and the rank-lowering disentanglers in the MERA are obtained by solving an orthogonal Procrustes problem. A detailed description of the method can be found in MERACLE: Constructive layer-wise conversion of a Tensor Train into a MERA.

https://github.com/kbatseli/MERACLE
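
For reference, an orthogonal Procrustes problem is solved with a single SVD. The snippet below is a generic Matlab/Octave sketch of that step (find the orthogonal Q that minimizes ||Q*A - B||_F for two given matrices A and B); it is not taken from the MERACLE code, and A and B are just placeholder matrices.

    % Generic orthogonal Procrustes problem: min over orthogonal Q of ||Q*A - B||_F.
    % A and B are arbitrary placeholders, not quantities produced by MERACLE.
    A = randn(4, 10);
    B = randn(4, 10);
    [U, ~, V] = svd(B * A');      % SVD of the cross matrix B*A'
    Q = U * V';                   % optimal orthogonal transformation
    residual = norm(Q * A - B, 'fro');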

Tensor Network MOESP

The Tensor Network MOESP algorithm estimates polynomial state space models from measured input-output data. The state space models that can be identified are characterized by a state that enters linearly and an input that enters polynomially, extending the classical linear time-invariant state space models with a polynomial nonlinearity; these models can also be interpreted as parallel Hammerstein systems. The curse of dimensionality resulting from the exponential number of coefficients is lifted through the use of tensor networks. A detailed description of the method can be found in Tensor network subspace identification of polynomial state space models.

https://github.com/kbatseli/TNMOESP
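
To make the model class concrete, the toy Matlab/Octave snippet below simulates a state space model whose state enters linearly and whose input enters through monomials up to degree 3. All dimensions and matrices are arbitrary illustrative choices; the snippet shows the kind of model being identified, not the identification algorithm.

    % Toy simulation of a state space model that is linear in the state and
    % polynomial in the input (degree 3). All matrices are arbitrary examples.
    n = 3; N = 100;                     % state dimension, number of samples
    A = 0.5 * orth(randn(n));           % stable state transition matrix
    B = randn(n, 3);                    % one column per monomial u, u^2, u^3
    C = randn(1, n);
    D = randn(1, 3);
    u = randn(N, 1);
    x = zeros(n, 1); y = zeros(N, 1);
    for k = 1:N
        phi  = [u(k); u(k)^2; u(k)^3];  % polynomial features of the input
        y(k) = C * x + D * phi;         % output: linear in x, polynomial in u
        x    = A * x + B * phi;         % state update: linear in x, polynomial in u
    end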

Tensor Network randomized SVD

This package contains Matlab/Octave code for computing a low-rank approximation of a given matrix in the MPO (Matrix Product Operator, also called Tensor Train matrix) format using a randomized matrix algorithm. The package also includes an algorithm for converting a sparse matrix into MPO form. A detailed description of the method can be found in Computing low-rank approximations of large-scale matrices with the Tensor Network randomized SVD.

https://github.com/kbatseli/TNrSVD
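
Stripped of the MPO machinery, the randomized idea looks as follows for an ordinary matrix. This is a generic sketch of the standard randomized range-finder, not the TNrSVD routine; the example matrix and the rank/oversampling values are arbitrary.

    % Generic randomized SVD of a plain matrix; TNrSVD performs these steps
    % with all operands stored in MPO form, which is not shown here.
    A = randn(500, 15) * randn(15, 400);     % example matrix of low rank
    r = 20; p = 10;                          % target rank and oversampling
    Omega = randn(size(A, 2), r + p);        % random test matrix
    Y = A * Omega;                           % sample the range of A
    [Q, ~] = qr(Y, 0);                       % orthonormal basis for the sampled range
    [Ub, S, V] = svd(Q' * A, 'econ');        % small SVD of the projected matrix
    U = Q * Ub;
    relerr = norm(A - U(:,1:r) * S(1:r,1:r) * V(:,1:r)', 'fro') / norm(A, 'fro');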

Tensor Network Kalman filter for recursive MIMO Volterra system identification

This package contains Matlab/Octave code for the recursive identification of MIMO Volterra systems using a Tensor Network Kalman filter. A detailed description of the method can be found in A Tensor Network Kalman filter with an application in recursive MIMO Volterra system identification.

https://github.com/kbatseli/TNKalman
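
For readers unfamiliar with the underlying recursion, the snippet below is a plain Kalman filter measurement update for a model that is linear in its parameters (essentially recursive least squares). It is only meant to illustrate the type of update involved; TNKalman carries out such updates with the estimate, covariance and regression quantities represented as tensor networks, which is not shown here.

    % Plain Kalman filter update for estimating constant parameters theta
    % from scalar measurements y(k) = c(k)'*theta + noise. Generic sketch only.
    n = 4; N = 200;
    theta_true = randn(n, 1);
    theta = zeros(n, 1);          % parameter estimate
    P = 1e3 * eye(n);             % estimate covariance
    R = 0.01;                     % measurement noise variance
    for k = 1:N
        c = randn(n, 1);                      % regression vector at time k
        y = c' * theta_true + sqrt(R) * randn;
        K = P * c / (c' * P * c + R);         % Kalman gain
        theta = theta + K * (y - c' * theta); % measurement update
        P = P - K * (c' * P);                 % covariance update
    end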

Tensor Network Alternating Linear Scheme for MIMO Volterra system identification (MVMALS)

This package contains Matlab/Octave code for the identification of MIMO Volterra systems using the (Modified) Alternating Linear Scheme (MALS). All coefficients of the Volterra kernels are combined into one tensor that is never explicitly formed. A detailed description of the method can be found in the paper Tensor Network alternating linear scheme for MIMO Volterra system identification.

https://github.com/kbatseli/MVMALS
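
As a reminder of what a (SISO) Volterra system looks like, here is a tiny example of degree 2 and memory 2 with the kernel coefficients written out explicitly. The numbers are arbitrary and purely illustrative; the package identifies the MIMO generalization, with the kernel coefficients kept in tensor network form rather than written out like this.

    % Output of a tiny SISO Volterra system of degree 2 and memory M = 2:
    % y(k) = h0 + sum_i h1(i)*u(k-i+1) + sum_{i,j} h2(i,j)*u(k-i+1)*u(k-j+1).
    M  = 2;
    h0 = 0.1;
    h1 = [0.5; -0.3];              % first-order kernel
    h2 = [0.2 0.05; 0.05 -0.1];    % symmetric second-order kernel
    u  = randn(50, 1);
    y  = zeros(50, 1);
    for k = M:50
        uk   = u(k:-1:k-M+1);                 % current and delayed inputs
        y(k) = h0 + h1' * uk + uk' * h2 * uk; % degree-0, 1 and 2 contributions
    end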

Tensor-based Kronecker Product Singular Value Decomposition (TKPSVD)

The Tensor-based Kronecker Product Singular Value Decomposition (TKPSVD) decomposes an arbitrary tensor A into a unique linear combination of terms, each of which is a Kronecker product of d factors. This makes it straightforward to determine a low-rank approximation and to quantify the relative approximation error. Furthermore, if A has a general symmetry structure (symmetric, persymmetric, centrosymmetric, ...) or a shifted-index structure (Toeplitz, Hankel, constant rows, constant columns), then the Kronecker factors are guaranteed to inherit this structure. More information can be found in A constructive arbitrary-degree Kronecker product decomposition of tensors.

Matlab/Octave implementations can be found at:

https://github.com/kbatseli/TKPSVD
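
For the special case of a matrix and two Kronecker factors, the underlying idea goes back to the rearrangement of Van Loan and Pitsianis: after reshuffling the entries, every Kronecker product term becomes a rank-1 term, so an SVD of the rearranged matrix yields the Kronecker factors. The sketch below covers only that special case, not the general d-factor tensor algorithm of the package.

    % Nearest Kronecker product B (m1 x n1) kron C (m2 x n2) to a matrix A,
    % via rearrangement and SVD (two-factor matrix case only).
    m1 = 3; n1 = 4; m2 = 5; n2 = 2;
    A = kron(randn(m1, n1), randn(m2, n2)) + 1e-3 * randn(m1*m2, n1*n2);
    R = zeros(m1*n1, m2*n2);
    for j = 1:n1
        for i = 1:m1
            blk = A((i-1)*m2+1:i*m2, (j-1)*n2+1:j*n2);  % (i,j) block of A
            R(i + (j-1)*m1, :) = blk(:)';               % vectorized block as a row
        end
    end
    [U, S, V] = svd(R, 'econ');          % rank-1 terms of R = Kronecker terms of A
    B = reshape(U(:,1), m1, n1);
    C = reshape(V(:,1), m2, n2);
    A1 = S(1,1) * kron(B, C);            % best single Kronecker product approximation
    relerr = norm(A - A1, 'fro') / norm(A, 'fro');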

Symmetric Tensor Eigen-Rank One Iterative Decomposition (STEROID)

The Symmetric Tensor Eigen-Rank-One Iterative Decomposition (STEROID) decomposes an arbitrary symmetric tensor A into a real linear combination of unit-norm symmetric rank-1 terms.

Matlab/Octave implementations can be found at:

https://github.com/kbatseli/STEROID
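
To make the building blocks concrete: a symmetric rank-1 term of order 3 is an outer product v ∘ v ∘ v of a single vector with itself. The snippet below merely constructs such a term and checks its symmetry; it is not the STEROID algorithm.

    % Build a unit-norm symmetric rank-1 tensor of order 3, T(i,j,k) = v(i)*v(j)*v(k),
    % and verify that it is invariant under permutation of its indices.
    n = 4;
    v = randn(n, 1); v = v / norm(v);            % unit-norm generating vector
    T = reshape(kron(v, kron(v, v)), n, n, n);   % symmetric rank-1 term v o v o v
    symmetry_gap = norm(T(:) - reshape(permute(T, [2 1 3]), [], 1));   % zero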

Tensor Train Rank-1 (TTr1) decomposition

The Tensor Train Rank-1 singular value decomposition (TTr1SVD) decomposes an arbitrary tensor A into a unique linear combination of orthonormal rank-1 terms. The Tensor Train Rank-1 symmetric eigenvalue decomposition (TTr1SED) does the same for a tensor that is symmetric in its last two modes, which therefore have equal dimension. This makes it straightforward to determine a low-rank approximation and to quantify the approximation error.

Matlab/Octave implementations can be found at:

https://github.com/kbatseli/TTr1SVD
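
For a 3-way tensor the construction essentially amounts to two nested SVDs: one of the first unfolding and one of each reshaped right singular vector. The following Matlab/Octave sketch illustrates that idea and checks the exact reconstruction; it is a simplified illustration, while the packaged code handles tensors of arbitrary order as well as the symmetric variant.

    % TTr1-style decomposition of a 3-way tensor by nested SVDs (sketch only).
    n1 = 3; n2 = 4; n3 = 2;
    A  = randn(n1, n2, n3);
    A1 = reshape(A, n1, n2*n3);              % first unfolding of A
    [U, S, V] = svd(A1, 'econ');
    Arec = zeros(n1, n2, n3);                % reconstruction from the rank-1 terms
    for r = 1:size(S, 1)
        Vr = reshape(V(:, r), n2, n3);       % reshape r-th right singular vector
        [Ur, Sr, Wr] = svd(Vr, 'econ');
        for s = 1:size(Sr, 1)
            % orthonormal rank-1 term U(:,r) o Ur(:,s) o Wr(:,s) with weight S(r,r)*Sr(s,s)
            t = kron(Wr(:, s), kron(Ur(:, s), U(:, r)));
            Arec = Arec + S(r, r) * Sr(s, s) * reshape(t, n1, n2, n3);
        end
    end
    relerr = norm(A(:) - Arec(:)) / norm(A(:));   % numerically zero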

Polynomial Numerical Linear Algebra (PNLA) package

During my PhD I worked on a numerical linear algebra framework for solving problems with multivariate polynomials. Multivariate polynomials are ubiquitous in engineering: not only are they a natural modelling tool, but many parameter estimation problems can also be written as finding the roots of a system of multivariate polynomials. Most computational methods in this setting are symbolic; in fact, a whole research domain called computer algebra sprang into being to develop and study symbolic algorithms for problems with multivariate polynomials, and as a result numerical approaches to these problems remained underdeveloped.

My thesis bridges this gap by developing a numerical linear algebra framework for solving problems with multivariate polynomials, such as elimination of variables, root-finding and computing an approximate GCD. The two main tools in this polynomial numerical linear algebra (PNLA) framework are the singular value decomposition (SVD) and the rank-revealing QR decomposition. All Matlab/Octave functions written during my PhD are still under development and can be freely downloaded at:

https://github.com/kbatseli/PNLA_MATLAB_OCTAVE

Some preliminary Julia implementations can be freely downloaded at:

https://github.com/kbatseli/PNLA_JULIA
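
As a small univariate illustration of this style of computation: the degree of the greatest common divisor of two polynomials can be read off from the numerical rank of their Sylvester matrix, computed here with an SVD. This is a classical textbook example rather than code from the package, which works with multivariate polynomials and correspondingly larger coefficient matrices.

    % Degree of the GCD of two univariate polynomials from the numerical rank
    % of their Sylvester matrix (classical illustration of the matrix-plus-SVD view).
    p = conv([1 -1], [1 -2]);        % (x-1)(x-2) = x^2 - 3x + 2
    q = conv([1 -1], [1 -3]);        % (x-1)(x-3) = x^2 - 4x + 3, common root x = 1
    m = length(p) - 1;               % degree of p
    n = length(q) - 1;               % degree of q
    S = zeros(m + n);
    for i = 1:n
        S(i, i:i+m) = p;             % rows with shifted coefficients of p
    end
    for i = 1:m
        S(n+i, i:i+n) = q;           % rows with shifted coefficients of q
    end
    sv = svd(S);
    r = sum(sv > max(size(S)) * eps(sv(1)));   % numerical rank of S
    gcd_degree = m + n - r;          % equals 1 here, from the common factor (x-1)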