Exploiting sparsity in Expectation-Maximization for fitting finite mixture models
Jason Wyse, Trinity College Dublin.
19th of February 2026
Abstract:
Finite mixture models provide a fundamental means of describing heterogeneous data within a probabilistic framework. The main workhorses for fitting them are the Expectation-Maximization (EM) and variational EM (VEM) algorithms, parameter estimation approaches used heavily in statistics and machine learning, with dedicated software packages such as mclust designed for efficient fitting. These iterative algorithms are built around the core idea of completing the observed data with component labels: the probability profiles of these labels are computed using the current parameter estimate, the profiles are then used to update that estimate, and the two steps alternate until convergence. This talk explores sparsity in EM fitting, stemming from the observation that in the finite mixture context there is often redundancy in the EM computations. A modified objective function in a standard EM algorithm can deliver competitive results with less computation. Such sparse algorithms assimilate knowledge about the fitting problem on the fly, learning what to learn, and so could be useful for large datasets or for computationally intensive methods such as bootstrapping.
Joint work with Silvia D’Angelo and Michael Fop
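To make the EM cycle described in the abstract concrete, the sketch below fits a one-dimensional Gaussian mixture, alternating an E-step (label probability profiles, i.e. responsibilities) with an M-step (weighted parameter updates). The sparsification rule shown, zeroing responsibilities below a small threshold `eps` and renormalising so the M-step sums skip negligible entries, is an assumed illustration of the redundancy-exploiting idea, not necessarily the modified objective function of the talk; the function name and quantile-based initialisation are likewise my own choices.

```python
import numpy as np

def sparse_em_gmm(x, k, n_iter=50, eps=1e-2):
    """Illustrative sparse-EM loop for a 1-D Gaussian mixture.

    NOTE: the thresholding rule (zero out responsibilities below
    `eps`, then renormalise each row) is an assumed sparsification
    scheme for illustration, not the talk's exact objective.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Initialise weights, means, variances (quantile spread for means).
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, np.var(x))
    for _ in range(n_iter):
        # E-step: log r[i, j] ∝ log w_j + log N(x_i | mu_j, var_j).
        log_r = (np.log(w)
                 - 0.5 * np.log(2 * np.pi * var)
                 - 0.5 * (x[:, None] - mu) ** 2 / var)
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # Sparsify: drop negligible responsibilities, renormalise rows.
        r[r < eps] = 0.0
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted updates using only the surviving entries.
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

After sparsification, most rows of the responsibility matrix have a single nonzero entry once components separate, so the M-step sums touch far fewer terms, which is the source of the computational saving for large datasets or repeated fits such as bootstrapping.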