Speakers
Audrey Repetti -- Heriot-Watt University, Edinburgh, UK
Nelly Pustelnik -- ENS Lyon, CNRS, France
Jean-Christophe Pesquet -- CentraleSupélec, Université Paris-Saclay, France
Imaging sciences are ubiquitous, assisting experts worldwide in addressing fundamental questions across observational sciences, biology, medicine, security, astronomy, and beyond. Since the early 2000s, signal and image processing has been significantly shaped by two major trends: sparsity-powered proximal algorithms and deep learning. The former relies on a clever integration of variational formulations and optimization schemes, while the latter hinges on intricately designed neural network architectures. Both approaches have demonstrated high performance across a wide range of applications, with deep learning often surpassing pure optimization methods in practical settings. However, for many decision-making processes, optimization methods may remain preferred because they offer strong theoretical guarantees that the computed solutions are reliable. More recently, there has been a surge in hybrid methods combining optimization and deep learning, which reach performance levels at least comparable to those of traditional deep learning while providing theoretical guarantees and interpretability.

In an era where both proximal algorithms and deep learning have reached advanced levels of maturity and complexity, a valuable opportunity arises to investigate the interplay between these methodological families. This tutorial aims to show that a unified framework can encapsulate four important classes of methods for solving inverse imaging problems: (i) variational methods powered by proximal algorithms, (ii) end-to-end neural networks, (iii) unfolded neural networks, and (iv) plug-and-play/implicit prior methods.
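To give a concrete flavour of how these classes relate, the toy sketch below (not taken from the tutorial material; the problem sizes, the regularization weight lam, and the hand-crafted toy_denoiser are purely illustrative assumptions) contrasts class (i), a proximal gradient/ISTA solver with an explicit l1 prior, with the plug-and-play idea of class (iv), where the proximity operator is swapped for a denoiser. Unrolling a fixed number of such iterations with learnable step sizes and thresholds is what yields an unfolded network, class (iii).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 48
A = rng.standard_normal((m, n)) / np.sqrt(m)    # forward/measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0   # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant of the data-fit gradient
lam = 0.05                                      # regularization weight (illustrative value)


def soft_threshold(v, tau):
    """Proximity operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)


def proximal_gradient(y, A, prox, n_iter=300):
    """Iterations x_{k+1} = prox(x_k - step * A^T (A x_k - y))."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox(x - step * A.T @ (A @ x - y))
    return x


# (i) variational method: explicit l1 prior handled by its proximity operator (ISTA)
x_variational = proximal_gradient(y, A, prox=lambda v: soft_threshold(v, step * lam))

# (iv) plug-and-play flavour: the prox is replaced by a denoiser; here a crude
# hand-crafted one stands in for what would be a pretrained neural denoiser
toy_denoiser = lambda v: np.where(np.abs(v) > 0.1, v, 0.0)
x_pnp = proximal_gradient(y, A, prox=toy_denoiser)

print("recovery error, variational (i):   ", np.linalg.norm(x_variational - x_true))
print("recovery error, plug-and-play (iv):", np.linalg.norm(x_pnp - x_true))
```

In practice the hand-crafted denoiser would be replaced by a learned one, and differentiating through a truncated version of the same loop is what turns the scheme into a trainable unfolded architecture.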