Blog


2023 CMS Winter Meeting: Final thoughts and congrats to Marco!

December 8, 2023

Co-organizing the 2023 CMS Winter Meeting together with Alina Stancu, François Bergeron, the scientific organizing committee, and the CMS staff has been a real blast. The success of this meeting makes me proud to be part of the Canadian mathematical community: we are so vibrant, diverse, and friendly! I am also grateful to all the Department members and colleagues from other institutions (faculty, postdocs, and students) who enthusiastically participated in this meeting by co-organizing sessions, offering minicourses, delivering talks, and presenting posters, thereby showcasing our excellent work at the national level and contributing to making this event the success that it was. I would also like to thank the Faculty of Arts and Science and our Department for financially supporting the event and our students.

And congratulations to Marco Mignacca, a former undergraduate student from Concordia University (now a grad student at McGill), who received the CMS Student Committee Award for the best poster in the AARMS/CMS Poster Competition! In the summer of 2023, Marco was awarded a prestigious NSERC undergraduate student summer fellowship to collaborate with Jason Bramburger and me on the research behind the award-winning poster.

For more info, see our Departmental Notice (by Christopher Plenzich):

https://www.concordia.ca/cunews/artsci/math-stats/2023/12/08/successful-2023-canadian-mathematical-society-winter-meeting-led.html?c=/artsci/math-stats 

A very happy moment at the banquet of the 2023 CMS Winter Meeting with my fellow scientific co-directors François Bergeron (center left) and Alina Stancu (center right), the CMS president David Pike (left) and the CMS executive director Termeh Kousha (right).
Marco Mignacca (center), former Concordia undergraduate co-supervised by Jason Bramburger and me, getting the CMS Student Committee Award.

Model-adapted Fourier sampling for generative compressed sensing 

October 24, 2023

A new paper in collaboration with A. Berk, Y. Plan, M. Scott, X. Sheng, and O. Yilmaz has just been accepted to the NeurIPS 2023 workshop "Deep Learning and Inverse Problems":

https://arxiv.org/abs/2310.04984 

In this work, we provide new recovery guarantees for compressed sensing with deep generative priors and Fourier-type measurements, motivated by applications in computational imaging. Our new theoretical results show that adapting the sampling scheme to the generative model yields a substantially better sample complexity than uniform random sampling. Our theory is validated by numerical experiments on the CelebA dataset.
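To convey the intuition (not the paper's actual algorithm), here is a minimal NumPy sketch: instead of sampling Fourier frequencies uniformly at random, we estimate the average Fourier energy of signals produced by a toy "generative model" and sample frequencies proportionally to it. The toy model, the problem sizes, and the energy-capture metric below are all my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generative model": random smooth signals, standing in for a trained
# deep generative prior (the paper uses images from the CelebA dataset).
def generate_signal(n=256):
    coeffs = rng.standard_normal(8)
    t = np.linspace(0, 1, n)
    return sum(c * np.cos(2 * np.pi * k * t) for k, c in enumerate(coeffs, 1))

n, n_train, m = 256, 200, 16

# Estimate the model's average energy at each Fourier frequency.
samples = np.stack([generate_signal(n) for _ in range(n_train)])
energy = np.mean(np.abs(np.fft.fft(samples, axis=1)) ** 2, axis=0)
probs = energy / energy.sum()

# Model-adapted sampling: draw m frequencies with probability proportional
# to the model's energy; uniform random sampling for comparison.
adapted = rng.choice(n, size=m, replace=False, p=probs)
uniform = rng.choice(n, size=m, replace=False)

# Fraction of a fresh signal's Fourier energy captured by each scheme.
spectrum = np.abs(np.fft.fft(generate_signal(n))) ** 2
for name, idx in [("adapted", adapted), ("uniform", uniform)]:
    print(f"{name}: {spectrum[idx].sum() / spectrum.sum():.1%} of energy captured")
```

Because the toy model concentrates its energy on a few low frequencies, the adapted scheme captures far more signal energy per measurement than uniform sampling, mirroring (very loosely) the gap in sampling rates reported in the paper.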

Uniform vs. model-adapted Fourier sampling. We compare the compressive recovery of deep network-generated images from the CelebA dataset as a function of the sampling rate (defined as the number of measurements divided by the number of pixels) for uniform and model-adapted Fourier sampling. To achieve successful recovery, uniform sampling needs a sampling rate of 6.25%, whereas model-adapted Fourier sampling requires a sampling rate of only 0.098%. For more details, see Figure 1 of the paper. Numerical experiment by Xia Sheng.

Numerical analysis notebooks

September 27, 2023

I have finally overcome procrastination and started learning Python.

I am doing this by developing a series of Jupyter notebooks for my Numerical Analysis course MAST 334 at Concordia (teaching is learning!). These notebooks illustrate fundamental concepts of numerical analysis such as round-off errors, root-finding methods, interpolation, function approximation, and numerical methods for differential equations. The main Python modules employed are NumPy, Matplotlib, and SciPy.
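To give a flavour of what the notebooks cover, here is a minimal NumPy sketch of Newton's method, one of the root-finding methods mentioned above. The test function, starting point, and tolerance are my own illustrative choices, not taken from the notebooks themselves.

```python
import numpy as np

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Find a root of f via Newton's method: x <- x - f(x) / f'(x)."""
    x = x0
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k
        x = x - fx / df(x)
    return x, max_iter

# Example: approximate the cube root of 2 as a root of f(x) = x^3 - 2.
root, iters = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
print(f"root = {root:.12f} after {iters} iterations; 2^(1/3) = {2**(1/3):.12f}")
```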

I am now in the process of developing these notebooks as the Fall term unfolds. If you want to follow me through this process, check out my GitHub repo:

https://github.com/simone-brugiapaglia/numerical-analysis-notebooks 

Comments and feedback of any type are very welcome!

The image illustrates a Julia set: a fractal generated by a colouring rule based on the convergence properties of Newton's method. This fractal is created in Notebook 5, entitled "Going off on a tangent with Newton's method".

New podcast interview (Perceptional Imprints and 4TH SPACE)

August 31, 2023

I had the pleasure of being interviewed by Abhishek Kyalkar, host of the podcast Perceptional Imprints and a Concordia alumnus. In our conversation, we talk about my research in the mathematics of data science, the future of artificial intelligence, how to cultivate a love for mathematics, the academic job market, memes of cats reading books, and much more!

This was a really fun interview in a wonderful (professionally equipped!) location offered by Concordia's 4TH SPACE. Check it out! 

CMS Winter Meeting 2023

July 20, 2023

I am excited to serve as Scientific Director of the 2023 Winter Meeting of the Canadian Mathematical Society (CMS), together with Alina Stancu and François Bergeron! The meeting will take place in Montréal from December 1 to 4, 2023.

https://www.winter23.cms.math.ca/ 

This event is held every six months across Canada and showcases the excellent research of the Canadian mathematical community.

The call for sessions is now open. Consider submitting a proposal!

Generalization Limits of Graph Neural Networks in Identity Effects Learning

July 17, 2023

A new preprint in collaboration with Giuseppe Alessio D'Inverno and Mirco Ravanelli is on the arXiv!

https://arxiv.org/abs/2307.00134 

We establish new theorems about fundamental generalization limits of Graph Neural Networks (GNNs) for the problem of identity effect learning (i.e., classifying whether two objects are identical or not). In the paper, we prove impossibility theorems for GNNs when the identity effect information is encoded at the node feature level. Conversely, we show that GNNs are able to learn identity effects based on graph topological information using an approach based on the Weisfeiler-Lehman coloring test. The theory is validated and complemented by extensive numerical illustrations.
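For readers unfamiliar with it, the Weisfeiler-Lehman test iteratively refines node colors by combining each node's color with the multiset of its neighbors' colors. Here is a minimal sketch on two hand-built dicyclic-style toy graphs; the graphs, labels, and number of rounds are my own choices, and the paper's actual code lives in the GitHub repository linked below.

```python
from collections import Counter

def wl_colors(adj, labels, rounds=3):
    """1-Weisfeiler-Lehman color refinement on an adjacency-list graph."""
    colors = list(labels)
    for _ in range(rounds):
        # New color = (own color, sorted multiset of neighbor colors).
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in range(len(adj))
        ]
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        colors = [palette[sig] for sig in signatures]
    return Counter(colors)

# Two dicyclic-style graphs joined by a bridge between nodes 0 and 3:
# a symmetric one (triangle + triangle) and an asymmetric one
# (triangle + 4-cycle).
sym = [[1, 2, 3], [0, 2], [0, 1], [0, 4, 5], [3, 5], [3, 4]]
asym = [[1, 2, 3], [0, 2], [0, 1], [0, 4, 6], [3, 5], [4, 6], [3, 5]]

# The two graphs produce different color histograms, so 1-WL (and hence a
# sufficiently expressive GNN) can tell them apart from topology alone.
for name, g in [("symmetric", sym), ("asymmetric", asym)]:
    print(name, dict(wl_colors(g, labels=[0] * len(g))))
```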

A GitHub repository with code needed to reproduce our numerical results, curated by G.A. D'Inverno, can be found at the following link:

https://github.com/AleDinve/gnn_identity_effects 

Classification of symmetric dicyclic graphs. One of the case studies in the paper is the classification of dicyclic graphs (i.e., graphs formed by linking two cycles). Identifying whether a dicyclic graph is symmetric (i.e., formed by two cycles of the same length) is an example of an identity effect on graphs. To study the generalization properties of graph neural networks in this context, we consider an extrapolation task, where the test set contains graphs with more nodes than those in the training set. For more details, see Figure 9 of the paper. Visualization by G.A. D'Inverno.

The Square Root LASSO and the "tuning trade off"

April 4, 2023

Aaron Berk, Tim Hoheisel and I have recently submitted a new paper on the Square Root LASSO, available on the arXiv at 

https://arxiv.org/abs/2303.15588

Building on our recent work LASSO Reloaded, we propose a variational analysis of the Square Root LASSO, showing regularity results for the solution map and studying tuning parameter sensitivity. More specifically, we identify assumptions that guarantee well-posedness and Lipschitz stability, with explicit upper bounds. Our investigation reveals the presence of a "tuning trade off" and suggests that the robustness of the Square Root LASSO's optimal tuning to measurement noise comes at the price of increased sensitivity of the solution map to the tuning parameter itself.
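For context, the Square Root LASSO replaces the LASSO's squared data-fidelity term with an unsquared one, i.e., it minimizes ||Ax - b||_2 + λ||x||_1; this is what makes its optimal tuning parameter independent of the noise level. Below is a minimal CVXPY sketch; the toy problem sizes and the value of λ are my own choices and are unrelated to the paper's experiments.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 50, 100, 5

# Sparse ground truth and noisy Gaussian measurements b = A x + e.
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Square Root LASSO: minimize ||A x - b||_2 + lam * ||x||_1
# (note the *unsquared* residual norm, unlike the standard LASSO).
x = cp.Variable(n)
lam = 0.5
problem = cp.Problem(cp.Minimize(cp.norm(A @ x - b, 2) + lam * cp.norm(x, 1)))
problem.solve()

rel_err = np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.3f}")
```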

The "tuning trade off": LASSO vs. Square Root LASSO. The figure illustrates the local Lipschitz behaviour of the (unconstrained) LASSO (in blue) and of the Square Root LASSO (in red). Dashed lines represent the solution map's variation in Euclidean norm as a function of the tuning parameter. Our Lipschitz bounds are plotted using solid lines. For more details, see Figure 2 in the paper. Numerics by Aaron Berk.

The greedy side of the LASSO

March 8, 2023

Sina M.-Taheri and I have just submitted a new paper, available on the arXiv at 

https://arxiv.org/abs/2303.00844 

In it, we propose a class of greedy algorithms for weighted sparse recovery by considering new loss function-based generalizations of Orthogonal Matching Pursuit (OMP). We show that greedy selection rules associated with popular loss functions such as those of the LASSO (Least Absolute Shrinkage and Selection Operator), the Square Root LASSO (SR-LASSO), and the Least Absolute Deviations LASSO (LAD-LASSO) admit explicitly computable and simple formulas. Moreover, we numerically demonstrate the effectiveness of the proposed algorithms and empirically show that they inherit desirable characteristics from the corresponding loss functions, such as the SR-LASSO's noise-blind optimal parameter tuning and the LAD-LASSO's fault tolerance. In doing so, our study sheds new light on the connection between greedy sparse recovery and convex relaxation.
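As a point of reference, here is a minimal NumPy sketch of the classical (least-squares) OMP that the paper generalizes. The loss-function-based selection rules from the paper are not implemented here, and the problem sizes are my own illustrative choices.

```python
import numpy as np

def omp(A, b, s):
    """Orthogonal Matching Pursuit: greedily build an s-sparse solution.

    At each step, the standard least-squares selection rule picks the column
    of A most correlated with the current residual; the paper derives
    analogous explicit rules for LASSO-, SR-LASSO-, and LAD-LASSO-type losses.
    """
    m, n = A.shape
    support, residual = [], b.copy()
    for _ in range(s):
        # Greedy selection: column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit by least squares on the current support; update residual.
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(1)
m, n, s = 40, 80, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true  # noiseless measurements for this toy demo

x_hat = omp(A, b, s)
print(f"recovery error: {np.linalg.norm(x_hat - x_true):.2e}")
```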

Robustness of parameter tuning to unknown noise for SR-LASSO-based OMP. The solution accuracy obtained via SR-LASSO-based OMP is plotted as a function of the tuning parameter λ for different levels of measurement noise. As in the SR-LASSO case, the optimal choice of λ is independent of the noise level. For more details, see Figure 1 of the paper. Numerics by Sina M.-Taheri.
