Second-order methods for optimisation problems in machine learning



2019-2022 project for the exchange of researchers, selected within the framework of the Executive Program of Cooperation in the Field of Science and Technology between the Italian Republic and the Republic of Serbia, 2019-2021.

Italian coordinator: Prof. Stefania Bellavia, University of Florence

Serbian coordinator: Prof. Nataša Krklec Jerinkic, University of Novi Sad


The problem we consider is the minimization of an aggregate loss function given as the sum of a large number of local loss functions. The motivation comes from machine learning, data fitting and stochastic optimization, where the objective function is given in the form of a mathematical expectation and approximated by a sample average. First-order stochastic methods are typically used, but they suffer from slow convergence, sensitivity to hyper-parameter tuning and other issues. In contrast, stochastic second-order optimization methods can be more robust on ill-conditioned and/or nonconvex problems and less dependent on hyper-parameters.
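In this setting the objective can be written in the standard sample average approximation form (the notation below is ours, not taken from the project documents):

```latex
\min_{x \in \mathbb{R}^d} \; f(x) = \mathbb{E}\left[ F(x;\xi) \right]
\;\approx\; f_N(x) = \frac{1}{N} \sum_{i=1}^{N} f_i(x),
```

where each local loss $f_i(x) = F(x;\xi_i)$ corresponds to one sample $\xi_i$ and the number of samples $N$ is large, so that evaluating the full gradient or Hessian of $f_N$ at every iteration is expensive.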


We focus on second-order stochastic approaches, such as subsampled inexact Newton, cubic regularization and trust-region methods, with adaptive selection of the sample size and of the learning rate.
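As a concrete illustration of this class of methods (a minimal sketch, not the project's actual algorithm), the following Python code runs a subsampled inexact Newton iteration on l2-regularized logistic regression: gradient and Hessian are estimated on a subsample, the Newton system is solved only approximately by a few conjugate-gradient steps, and the sample size grows adaptively. The growth factor, CG tolerance and unit step length are illustrative assumptions.

```python
# A minimal sketch of a subsampled inexact Newton method with a simple
# adaptive (growing) sample-size schedule, on l2-regularized logistic
# regression. All schedules and tolerances are illustrative assumptions.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)

# Synthetic data: N local losses f_i(x) = log(1 + exp(-b_i * a_i^T x)).
N, d = 5000, 20
A = rng.standard_normal((N, d))
b = np.sign(A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N))
lam = 1e-3  # l2 regularization, keeps the Hessian positive definite

def grad_hess(x, idx):
    """Subsampled gradient and Hessian of the average loss over batch idx."""
    Ai, bi = A[idx], b[idx]
    z = bi * (Ai @ x)
    s = expit(-z)                        # sigmoid(-z), numerically stable
    g = -(Ai.T @ (bi * s)) / len(idx) + lam * x
    w = s * (1.0 - s)                    # per-sample Hessian weights
    H = (Ai.T * w) @ Ai / len(idx) + lam * np.eye(d)
    return g, H

x = np.zeros(d)
n = 100                                  # initial sample size
for k in range(30):
    idx = rng.choice(N, size=n, replace=False)
    g, H = grad_hess(x, idx)
    # Inexact Newton step: solve H p = -g approximately with a few CG
    # iterations, stopping once the residual is well below ||g||.
    p = np.zeros(d)
    r = -g.copy()
    q = r.copy()
    for _ in range(10):
        if np.linalg.norm(r) <= 0.1 * np.linalg.norm(g):
            break
        Hq = H @ q
        alpha = (r @ r) / (q @ Hq)
        p += alpha * q
        r_new = r - alpha * Hq
        q = r_new + ((r_new @ r_new) / (r @ r)) * q
        r = r_new
    x += p                               # unit step; line search omitted
    n = min(N, int(1.5 * n))             # adaptive sample-size growth
    print(f"iter {k:2d}  sample size {n:5d}  ||g|| = {np.linalg.norm(g):.2e}")
```

A trust-region or cubic regularization variant would replace the plain Newton step above with a step computed from a regularized or constrained subproblem, while the adaptive sample-size mechanism plays the same role.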