Sparse learning

"We consider it a good principle to explain the phenomena by the simplest hypothesis possible" - Ptolemy

Learning sparse models: theory and applications

from system identification to neural networks

One-day workshop

Wednesday, 16 September, 2020

Organized by Sophie M. Fosson and Diego Regruto - Dipartimento di Automatica e Informatica, Politecnico di Torino, Italy

Supported by IEEE CSS Italy Chapter and IEEE CSS Technical Committee on System Identification and Adaptive Control (TC-SIAC)

In data-driven science, it is fundamental to extract the essential information from data, avoiding redundancy, over-fitting, and unnecessary model complexity. For this purpose, sparse optimization is exploited to learn parsimonious models: this is "sparse learning".
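As a minimal illustration of this idea (not drawn from the workshop material), the classical Lasso problem learns a parsimonious linear model by penalizing the l1 norm of the parameters. The sketch below solves it with the iterative shrinkage-thresholding algorithm (ISTA) on synthetic data; all names and parameter values are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrinks entries toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=2000):
    # Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient steps.
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

# Synthetic sparse-recovery instance: few measurements, high dimension.
rng = np.random.default_rng(0)
n, p, k = 50, 200, 5                   # measurements, dimension, sparsity
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = ista(A, y, lam=0.01)
```

Despite having only 50 measurements for 200 unknowns, the l1 penalty drives most estimated coefficients to exactly zero, recovering a model with roughly the true sparsity; this is the core mechanism behind the compressed sensing and identification methods discussed in the talks.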

In recent years, the theory of sparse optimization has been developed within the signal processing community, in particular in the context of compressed sensing. Nowadays, it is applied in several areas, ranging from linear and non-linear system identification to neural networks and deep learning.

The aim of this workshop is to bring together researchers from the signal processing, system identification, and machine learning communities to discuss new challenges in sparse learning, with particular attention to applying recent theoretical results to real-world problems.



Program:

  1. 9:00 a.m. "An introduction to sparse learning" by Sophie M. Fosson - Politecnico di Torino (Torino, Italy)
  2. 9:30 a.m. "On the sparse estimation approach to change point detection" by Cristian Rojas - KTH (Stockholm, Sweden)
  3. 10:15 a.m. "Incoherent measurement matrices in compressed sensing and applications to 5G communications" by Cristian Rusu - IIT (Genova, Italy)
  4. 11:10 a.m. "Iterative regularization: general convex case" by Cesare Molinari - IIT (Genova, Italy)
  5. 11:55 a.m. "Regularized and Distributionally Robust Data-Enabled Predictive Control" by Jeremy Coulson - ETH (Zürich, Switzerland)
  6. 2:15 p.m. "Incorporating a priori physical constraints in the sparse identification of nonlinear dynamical systems" by Jean-Christophe Loiseau - ENSAM (Paris, France)
  7. 3:00 p.m. "Learning hidden influences in sparse social networks" by Chiara Ravazzi - CNR-IEIIT (Torino, Italy)
  8. 3:45 p.m. "Adaptive-rate reconstruction of sparse time-varying signals: applications in compressive background subtraction" by João Mota - Heriot-Watt University (Edinburgh, UK)
  9. 4:40 p.m. "Learning sparse neural topologies with a structure" by Attilio Fiandrotti and Enzo Tartaglione - Università di Torino (Italy)
  10. 5:25 p.m. "Sparse Neural Networks and the Lottery Ticket Hypothesis" by Giulia Fracastoro - Politecnico di Torino (Italy)


Detailed information:

SparseLearning_Workshop_8sep2020.pdf

THANKS TO ALL THE ATTENDEES!

More than 100 people attended the workshop. We thank everyone for their interest in and enthusiasm for sparse learning. The workshop has been a good starting point for future discussions and collaborations. Stay tuned!

Slides and videos of the talks are available below:

01_Fosson.pdf
01_Fosson.mp4
02_Rojas.pdf
02_Rojas.mp4
03_Rusu.pdf
03_Rusu.mp4
04_Molinari.pdf
04_Molinari.mp4
05_Coulson.pdf
05_Coulson.mp4
06_Loiseau.pdf
06_Loiseau.mp4
07_Ravazzi.pdf
07_Ravazzi.mp4
08_Mota.pptx
08_Mota.mp4
09_FiandrottiTartaglione.pptx
09_FiandrottiTartaglione.mp4
10_Fracastoro.pdf
10_Fracastoro.mp4