MOLE project
Manifold constrained Optimization and LEarning
The project MOLE (Manifold constrained Optimization and LEarning) aims to develop and analyze new numerical algorithms to solve nonlinear tensor least-squares problems and to train deep neural networks on low-parametric Riemannian manifolds.
The project is a collaboration with the University of Pisa (PI Leonardo Robol) and the Gran Sasso Science Institute (GSSI; PI Francesco Tudisco).
It is funded under the PRIN 2022 program by the Italian Ministry of University and Research and started on February 4, 2025.
Description of the project
Today's technologies and connected world provide easy access to vast amounts of data. Fully exploiting these data is essential for the success of modern data-driven scientific computing. As data complexity increases, uncovering hidden low-dimensional structure becomes crucial for effective algorithm design. MOLE exploits representations of high-dimensional data as tensors approximated on low-dimensional Riemannian manifolds, and focuses on the design and analysis of algorithms for supervised learning with neural networks and for tensor least-squares problems.
Aims of the project
MOLE aims to develop and analyze new numerical algorithms to solve nonlinear tensor and matrix least-squares problems and to train deep neural networks on low-parametric Riemannian manifolds. The resulting computational and mathematical framework finds specific applications in data compression, network analysis, and the solution of PDEs, drawing on tools from numerical linear algebra, tensor analysis, network science, and numerical optimization.
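As a simple illustration of the manifold-constrained least-squares setting at the heart of the project (this is an illustrative sketch, not the project's code), the following Python snippet solves a low-rank matrix least-squares problem by projected gradient descent, retracting back to the rank-r manifold with a truncated SVD; the rank, step size, and observation mask are all placeholder choices.

import numpy as np

def truncated_svd(Y, r):
    # Retraction onto the set of rank-r matrices: best rank-r
    # approximation of Y in the Frobenius norm (Eckart-Young).
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def low_rank_least_squares(B, mask, r, step=0.5, iters=200):
    # Approximately solve min_{rank(X) <= r} ||mask * (X - B)||_F^2
    # by alternating a Euclidean gradient step with an SVD retraction.
    X = truncated_svd(mask * B, r)              # rank-r starting guess
    for _ in range(iters):
        grad = mask * (X - B)                   # gradient of the masked loss
        X = truncated_svd(X - step * grad, r)   # gradient step + retraction
    return X

# Example: recover a rank-2 matrix from roughly half of its entries.
rng = np.random.default_rng(0)
B = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(B.shape) < 0.5
X = low_rank_least_squares(B, mask, r=2)
print(np.linalg.norm(mask * (X - B)))           # residual on observed entries

Restricting the iterates to a low-rank manifold in this way is what keeps the parameter count low; the algorithms developed in MOLE pursue the same idea for tensors and for neural network training.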
Expected results
MOLE will provide key mathematical and algorithmic tools for (a) the time integration of high-dimensional PDEs, increasing efficiency while preserving relevant physical quantities; (b) high-dimensional data compression problems, such as dictionary learning, through the efficient computation of constrained matrix and tensor factorizations; and (c) neural-network-based supervised learning, with new efficient and mathematically sound training algorithms.
Publications
M. Porcelli, G. Seraghiti, Ph. L. Toint, prunAdag: an adaptive pruning-aware gradient method, Computational Optimization and Applications, DOI: 10.1007/s10589-025-00723-7 (2025).
A. Savostianov, N. Guglielmi, M. Schaub, F. Tudisco, Efficient Sparsification of Simplicial Complexes via Local Densities of States, 2025 (arXiv:2502.07558).
S. Schotthöfer, E. Zangrando, G. Ceruti, F. Tudisco, J. Kusch, GeoLoRA: Geometric integration for parameter efficient fine-tuning, International Conference on Learning Representations (ICLR 2025).
E. Zangrando, S. Venturini, F. Rinaldi, F. Tudisco, dEBORA: Efficient Bilevel Optimization-based low-Rank Adaptation, International Conference on Learning Representations (ICLR 2025).
N. Gillis, M. Porcelli, G. Seraghiti, An extrapolated and provably convergent algorithm for nonlinear matrix decomposition with the ReLU function, 2025 (arXiv:2503.23832) [code].