October 27th 2025, 2:30 p.m., Room 220, Centro Didattico Morgagni, Stefano Sicilia (Post-Doc at Université de Mons)
Title: Improving the robustness of neural ODEs with minimal weight perturbation
Abstract: We propose a method to enhance the stability of a neural ordinary differential equation (neural ODE) by reducing the maximum error growth following a perturbation of the initial value. Since the stability depends on the logarithmic norm of the Jacobian matrix associated with the neural ODE, we control the logarithmic norm by perturbing the weight matrices of the neural ODE by the smallest possible perturbation (in Frobenius norm). We do so by formulating an eigenvalue optimization problem, for which we propose a nested two-level algorithm. For a given perturbation size of the weight matrix, the inner level computes optimal perturbations of that size, while, at the outer level, we tune the perturbation amplitude until we reach the desired uniform stability bound. We embed the proposed algorithm in the training of the neural ODE to improve its robustness to perturbations of the initial value, such as adversarial attacks. Numerical experiments on classical image datasets show that an image classifier including a neural ODE in its architecture, trained according to our strategy, is more stable than the same classifier trained in the classical way, and therefore it is more robust and less vulnerable to adversarial attacks.
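The logarithmic norm mentioned in the abstract is a standard quantity: with respect to the 2-norm, the logarithmic norm of a matrix A equals the largest eigenvalue of its symmetric part (A + Aᵀ)/2, and it bounds the growth rate of solutions of the linear ODE x' = Ax. A minimal NumPy sketch of this standard definition (not the speaker's optimization algorithm) might look like:

```python
import numpy as np

def log_norm_2(A):
    """Logarithmic norm of A with respect to the 2-norm:
    the largest eigenvalue of the symmetric part (A + A^T)/2."""
    sym = (A + A.T) / 2.0
    return float(np.max(np.linalg.eigvalsh(sym)))

# For x'(t) = A x(t), the logarithmic norm mu gives the bound
# ||x(t)|| <= exp(mu * t) * ||x(0)||, so mu < 0 implies contractivity.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
mu = log_norm_2(A)  # negative here, so perturbations decay
```

In the talk's setting, A is the Jacobian of the neural ODE's vector field, and the proposed method perturbs the weight matrices to push this quantity below a desired stability bound.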
October 27th 2025, 3:00 p.m., Room 220, Centro Didattico Morgagni, Subhayan Saha (Post-Doc at Université de Mons)
Title: Optimization-based algorithms for computing unique nonnegative matrix and tensor factorizations
Abstract: Given a nonnegative matrix X and a factorization rank r, nonnegative matrix factorization (NMF) approximates the matrix X as the product of a nonnegative matrix W with r columns and a nonnegative matrix H with r rows. NMF has become a standard linear dimensionality reduction technique in data mining and machine learning. Such a factorization is not unique in general and is NP-hard to compute. We first introduce and motivate different nonnegative matrix factorization models, with a focus on volume-based geometric criteria under which such a decomposition is unique. We also discuss algorithms for computing the aforementioned unique matrix factorizations. We extend these uniqueness results to matrix tri-factorization models and also propose a fast algorithm (with convergence guarantees) to solve this problem based on the block majorization-minimization framework with extrapolation steps (TITAN) [Hien et al., 2023]. Finally, we discuss a model of tensor decompositions called the nonnegative Tucker decomposition (NTD), which generally does not have a unique decomposition, and existing algorithms for computing an NTD often suffer from the curse of dimensionality. We show that these results for NMF can be used to break this curse and design significantly more efficient algorithms with uniqueness guarantees.
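To make the NMF model concrete: the goal is to find nonnegative W (m × r) and H (r × n) minimizing ‖X − WH‖_F. A minimal sketch using the classical Lee–Seung multiplicative updates, a generic baseline and not the volume-based or TITAN algorithms discussed in the talk, might look like:

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=200, seed=0):
    """Approximate a nonnegative X (m x n) as W @ H with nonnegative
    W (m x r) and H (r x n), via Lee-Seung multiplicative updates
    for the Frobenius-norm objective. Baseline sketch only; the
    resulting factorization is not unique in general."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        # each update keeps the factors elementwise nonnegative
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The multiplicative form of the updates is what preserves nonnegativity without projections; the talk's point is that extra criteria (e.g. volume-based ones) are needed on top of such fits to single out a unique factorization.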