Blog
2023 CMS Winter Meeting: Final thoughts and congrats to Marco!
December 8, 2023
Co-organizing the 2023 CMS Winter Meeting together with Alina Stancu and François Bergeron, the scientific organizing committee, and the CMS staff has been a real blast. The success of this meeting makes me proud to be part of the Canadian mathematical community: we are so vibrant, diverse, and friendly! I am also grateful to all the Department members and colleagues from other institutions (faculty, postdocs, and students) who enthusiastically participated in this meeting by co-organizing sessions, offering minicourses, delivering talks, and presenting posters, thereby showcasing our excellent work at the national level and helping make this event the success that it was. I would also like to thank the Faculty of Arts and Science and our Department for financially supporting the event and our students.
And congratulations to Marco Mignacca, a former undergraduate student from Concordia University (now a grad student at McGill) who received the CMS Student Committee Award for the best poster in the AARMS/CMS Poster competition! Marco was awarded a prestigious NSERC undergraduate student summer fellowship to collaborate with Jason Bramburger and me on the award-winning research poster in the summer of 2023.
For more info, see our Departmental Notice (by Christopher Plenzich):
Model-adapted Fourier sampling for generative compressed sensing
October 24, 2023
A new paper in collaboration with A. Berk, Y. Plan, M. Scott, X. Sheng and O. Yilmaz has just been accepted to the NeurIPS 2023 workshop "Deep Learning and Inverse Problems"!
https://arxiv.org/abs/2310.04984
In this work, we provide new recovery guarantees for compressed sensing with deep generative priors and Fourier-type measurements, motivated by applications in computational imaging. Our new theoretical results show that adapting the sampling scheme to the generative model yields a substantially improved sample complexity compared to uniform random sampling. Our theory is validated by numerical experiments on the CelebA dataset.
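To convey the rough idea, here is a toy sketch of my own (not the paper's code or its actual sampling scheme): estimate how much energy signals produced by the generative model carry at each Fourier frequency, and draw measurement frequencies with probabilities proportional to those estimates rather than uniformly. The "generative model" below is just a smoothing filter, used as a stand-in.

```python
# Toy illustration (not the paper's code): draw m Fourier measurement indices either
# uniformly at random or with probabilities adapted to the signal class.
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 32

# Stand-in "generative model": smooth signals, so energy concentrates at low frequencies.
samples = np.array([np.convolve(rng.standard_normal(n), np.ones(16) / 16, mode="same")
                    for _ in range(200)])
energy = np.mean(np.abs(np.fft.fft(samples, axis=1)) ** 2, axis=0)  # per-frequency energy

p_adapted = energy / energy.sum()  # sample frequencies proportionally to model energy
idx_uniform = rng.choice(n, size=m, replace=False)
idx_adapted = rng.choice(n, size=m, replace=False, p=p_adapted)
print("uniform frequencies:", np.sort(idx_uniform)[:10])
print("adapted frequencies:", np.sort(idx_adapted)[:10])
```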
Numerical analysis notebooks
September 27, 2023
I have finally overcome procrastination and started learning Python.
I am doing this by developing a series of Jupyter notebooks for my Numerical Analysis course MAST 334 at Concordia (teaching is learning!). These notebooks illustrate fundamental concepts of numerical analysis such as round-off errors, root-finding methods, interpolation, function approximation, and numerical methods for differential equations. The main Python modules employed are NumPy, Matplotlib, and SciPy.
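To give a flavour of what the notebooks contain, here is a small root-finding example in the same spirit (written for this post, not copied from the repo):

```python
# Approximate the root of f(x) = cos(x) - x with Newton's method and compare with SciPy.
import numpy as np
from scipy import optimize

f = lambda x: np.cos(x) - x
df = lambda x: -np.sin(x) - 1  # derivative of f

x = 1.0  # initial guess
for k in range(10):
    x_new = x - f(x) / df(x)      # Newton update
    if abs(x_new - x) < 1e-12:    # stop when the increments become tiny
        break
    x = x_new

print("Newton's method:", x)
print("SciPy (Brent)  :", optimize.brentq(f, 0, 1))
```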
I am now in the process of developing these notebooks as the Fall term unfolds. If you want to follow me through this process, check out my GitHub repo:
https://github.com/simone-brugiapaglia/numerical-analysis-notebooks
Comments and feedback of any type are very welcome!
New podcast interview (Perceptional Imprints and 4TH SPACE)
August 31, 2023
I had the pleasure of being interviewed by Abhishek Kyalkar, host of the podcast Perceptional Imprints and a Concordia alumnus. In our conversation, we talk about my research in the mathematics of data science, the future of artificial intelligence, how to cultivate a love for mathematics, the academic job market, memes of cats reading books, and much more!
This was a really fun interview in a wonderful (professionally equipped!) location offered by Concordia's 4TH SPACE. Check it out!
CMS Winter Meeting 2023
July 20, 2023
I am excited to serve as Scientific Director of the 2023 Winter Meeting of the Canadian Mathematical Society (CMS), together with Alina Stancu and François Bergeron! The meeting will take place in Montréal on December 1-4, 2023.
https://www.winter23.cms.math.ca/
This event is held every six months across Canada and showcases the excellent research work of the Canadian mathematical community.
The call for sessions is now open! Consider submitting a proposal at:
https://www.winter23.cms.math.ca/callforsessions (Scientific sessions)
https://www.winter23.cms.math.ca/call-for-education (Education sessions)
Generalization Limits of Graph Neural Networks in Identity Effects Learning
July 17, 2023
A new preprint in collaboration with Giuseppe Alessio D'Inverno and Mirco Ravanelli is on the arXiv!
https://arxiv.org/abs/2307.00134
We establish new theorems about fundamental generalization limits of Graph Neural Networks (GNNs) for the problem of identity effect learning (i.e., classifying whether two objects are identical or not). In the paper, we prove impossibility theorems for GNNs when the identity effect information is encoded at the node feature level. Conversely, we show that GNNs are able to learn identity effects from graph topological information, via an approach based on the Weisfeiler-Lehman coloring test. The theory is validated and complemented by extensive numerical illustrations.
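For readers unfamiliar with it, here is a minimal sketch of 1-WL color refinement, the procedure underlying the Weisfeiler-Lehman test (an illustration written for this post, not code from the paper or from the repository mentioned below):

```python
# Minimal 1-WL color refinement. Nodes start with the same color; at each round, a node's
# new color is determined by its current color and the multiset of its neighbors' colors.
def wl_colors(adj, rounds=3):
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj}
        # relabel the signatures with small integers to get the new colors
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# Two 6-node graphs in which every node has degree 2, but with different structure:
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(sorted(wl_colors(cycle6).values()))        # same multiset of colors...
print(sorted(wl_colors(two_triangles).values())) # ...so 1-WL cannot tell these graphs apart
```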
A GitHub repository with code needed to reproduce our numerical results, curated by G.A. D'Inverno, can be found at the following link:
The Square Root LASSO and the "tuning trade off"
April 4, 2023
Aaron Berk, Tim Hoheisel and I have recently submitted a new paper on the Square Root LASSO, available on the arXiv at
https://arxiv.org/abs/2303.15588
Building on our recent work LASSO Reloaded, we propose a variational analysis of the Square Root LASSO, showing regularity results for the solution map and studying tuning parameter sensitivity. More specifically, we identify assumptions that guarantee well-posedness and Lipschitz stability, with explicit upper bounds. Our investigation reveals the presence of a "tuning trade off" and suggests that the robustness of the Square Root LASSO's optimal tuning to measurement noise comes at the price of increased sensitivity of the solution map to the tuning parameter itself.
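For concreteness, the Square Root LASSO can be solved numerically in a few lines of CVXPY. This is an illustrative sketch, not the code used in the paper, and the value of the tuning parameter lam below is a hypothetical choice for this toy setup:

```python
# Minimal Square Root LASSO solver via CVXPY (illustration only):
#   minimize ||A x - b||_2 + lam * ||x||_1
# Unlike the standard LASSO, the data-fidelity term is not squared.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 50, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true + 0.01 * rng.standard_normal(m)

x = cp.Variable(n)
lam = np.sqrt(np.log(n) / m)  # hypothetical tuning parameter for this toy example
problem = cp.Problem(cp.Minimize(cp.norm2(A @ x - b) + lam * cp.norm1(x)))
problem.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```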
The greedy side of the LASSO
March 8, 2023
Sina M.-Taheri and I have just submitted a new paper, available on the arXiv at
https://arxiv.org/abs/2303.00844
In it, we propose a class of greedy algorithms for weighted sparse recovery by considering new loss function-based generalizations of Orthogonal Matching Pursuit (OMP). We show that greedy selection rules associated with popular loss functions such as those of the LASSO (Least Absolute Shrinkage and Selection Operator), the Square Root LASSO (SR-LASSO) and the Least Absolute Deviations LASSO (LAD-LASSO) admit explicitly computable and simple formulas. Moreover, we numerically demonstrate the effectiveness of the proposed algorithms and empirically show that they inherit desirable characteristics from the corresponding loss functions, such as SR-LASSO's noise-blind optimal parameter tuning and LAD-LASSO's fault tolerance. In doing so, our study sheds new light on the connection between greedy sparse recovery and convex relaxation.
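For reference, here is a minimal implementation of classical OMP, the baseline algorithm that our loss function-based greedy selection rules generalize (an illustrative sketch, not the code accompanying the paper):

```python
# Minimal Orthogonal Matching Pursuit (illustrative sketch, not the paper's code).
import numpy as np

def omp(A, b, n_iter):
    """Greedily select columns of A and refit by least squares at each step."""
    m, n = A.shape
    support, residual = [], b.copy()
    for _ in range(n_iter):
        # greedy selection: column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # orthogonal projection step: least-squares fit on the current support
        coeffs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coeffs
    x = np.zeros(n)
    x[support] = coeffs
    return x

rng = np.random.default_rng(1)
m, n, s = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = 1.0
x_hat = omp(A, A @ x_true, n_iter=s)
print("recovered support:", np.nonzero(x_hat)[0])
```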