Blog
LASSO reloaded
May 17, 2022
I am quite excited to announce a new arXiv preprint written in collaboration with Aaron Berk (McGill) and Tim Hoheisel (McGill):
https://arxiv.org/abs/2205.06872
We provide a variational analysis of the unconstrained formulation of the LASSO problem, ubiquitous in statistical learning, signal processing, and inverse problems. In particular, we establish smoothness results for the optimal value as well as Lipschitz properties of the optimal solution as functions of the measurement vector and the tuning parameter. Moreover, we show how to apply the proposed variational analysis in the context of compressed sensing with subgaussian measurements. Our theoretical findings are validated by numerical experiments.
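To give a concrete feel for the object of study, here is a toy numerical sketch (not code from the paper, and all parameter values are illustrative). In one common parameterization, the unconstrained LASSO minimizes (1/2)||Ax - b||^2 + lambda*||x||_1, and one can probe how its solution reacts to small perturbations of the measurement vector b and the tuning parameter lambda using a basic ISTA solver written in Python.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, b, lam, n_iter=5000):
    # Solve min_x 0.5*||A x - b||_2^2 + lam*||x||_1 via ISTA (proximal gradient descent)
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
m, N, s = 50, 200, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)     # Gaussian (hence subgaussian) measurements
x_true = np.zeros(N); x_true[:s] = rng.standard_normal(s)
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
x_hat = lasso_ista(A, b, lam)

# Slightly perturb the measurement vector and the tuning parameter and compare solutions.
x_pert = lasso_ista(A, b + 1e-3 * rng.standard_normal(m), 1.01 * lam)
print(np.linalg.norm(x_hat - x_pert))

Re-running with smaller and smaller perturbations, the change in the solution should shrink roughly proportionally; our preprint rigorously quantifies exactly this kind of Lipschitz behavior.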
Interview for Concordia News: How to make AI ‘more intelligent’
April 5, 2022
I had the pleasure of being interviewed by Elizabeth Faure for Concordia News. In the interview, I talk about the importance of developing new theoretical guarantees for deep learning in order to deploy safer and more reliable AI technologies. In particular, I present my recent work in collaboration with Ben Adcock, Nick Dexter, and Sebastian Moraga on practical existence theorems for deep neural networks, to appear in the Proceedings of Machine Learning Research.
You can read the full interview on the Concordia News website.
I would also like to thank Taylor Tower for her assistance throughout the process.
On efficient algorithms for computing near-best polynomial approximations to high-dimensional, Hilbert-valued functions from limited samples
March 31, 2022
A new preprint in collaboration with Ben Adcock, Nick Dexter, and Sebastian Moraga is out!
Our paper addresses the following question: are there robust, efficient algorithms for computing approximations to finite- or infinite-dimensional, holomorphic and Hilbert-valued functions from limited samples that achieve best s-term rates?
We answer this in the affirmative by introducing algorithms and theoretical guarantees that assert exponential or algebraic rates of convergence, along with robustness to sampling, algorithmic, and physical discretization errors. Our work involves several significant developments of existing techniques, including a novel restarted primal-dual iteration for solving weighted l1-minimization problems in Hilbert spaces. Our theory is supplemented by numerical experiments demonstrating the practical efficacy of these algorithms.
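To give a concrete (and heavily simplified) taste of the compressed sensing step, here is a toy sketch for a scalar-valued function of one variable. It is not the restarted primal-dual iteration from the paper: it simply hands a weighted l1-minimization problem to the off-the-shelf convex solver cvxpy, and the target function, weights, and tolerance below are illustrative choices.

import numpy as np
import cvxpy as cp

# Toy setting: approximate a smooth function on [-1, 1] in the Chebyshev basis
# from a small number of random (Monte Carlo) samples.
f = lambda y: 1.0 / (1.0 + 0.5 * y**2)

rng = np.random.default_rng(1)
m, N = 40, 120                                # number of samples and of candidate coefficients
y = rng.uniform(-1.0, 1.0, m)
A = np.cos(np.outer(np.arccos(y), np.arange(N))) / np.sqrt(m)   # sampled Chebyshev polynomials
b = f(y) / np.sqrt(m)

w = np.ones(N)       # weights (all ones here; in practice larger weights penalize higher degrees)
eta = 1e-4           # assumed bound on the sampling/truncation error

c = cp.Variable(N)
problem = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, c))),
                     [cp.norm(A @ c - b, 2) <= eta])
problem.solve()

# Evaluate the recovered sparse Chebyshev expansion on a fine grid.
yy = np.linspace(-1, 1, 1000)
T = np.cos(np.outer(np.arccos(yy), np.arange(N)))
print("max approximation error:", np.max(np.abs(T @ c.value - f(yy))))

The paper's contribution lies precisely in what this sketch glosses over: handling high- or infinite-dimensional, Hilbert-valued functions, choosing the weights, and solving the optimization problem efficiently and robustly via a restarted primal-dual iteration.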
Check out our preprint on the arXiv! https://arxiv.org/abs/2203.13908
Book officially published
March 5, 2022
I am proud to announce that the book "Sparse Polynomial Approximation of High-Dimensional Functions" by Ben Adcock, Clayton Webster, and me has now been officially published by SIAM. You can find more information and order a copy at www.sparse-hd-book.ca.
Enjoy! And if you have any thoughts on the book you'd like to share with me, do not hesitate to write me an email.
Upcoming book! Sparse Polynomial Approximation of High-Dimensional Functions
December 16, 2021
I am incredibly excited to announce that the book Sparse Polynomial Approximation of High-Dimensional Functions, co-authored by Ben Adcock (Simon Fraser University), Clayton Webster (University of Texas), and me, is really close to being published in the SIAM CSE book series!
Our book provides an in-depth treatment of sparse polynomial approximation methods. These have emerged as useful tools for various high-dimensional approximation tasks arising in a range of applications in computational science and engineering. It begins with a comprehensive overview of best s-term polynomial approximation theory for holomorphic, high-dimensional functions, as well as a detailed survey of applications to parametric differential equations. It then describes methods for computing sparse polynomial approximations, focusing on least squares and compressed sensing techniques.
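To illustrate the flavor of the least-squares approach in a drastically simplified setting, here is a generic Monte Carlo sketch in two variables. It is not an excerpt from the book, and the target function and parameters are made up.

import numpy as np
from numpy.polynomial.legendre import legvander2d

# A smooth function of two variables, standing in for a high-dimensional target.
f = lambda y1, y2: 1.0 / (2.5 + y1 + y2)

rng = np.random.default_rng(2)
m, deg = 400, 10                               # Monte Carlo samples and max degree per variable
y1, y2 = rng.uniform(-1, 1, m), rng.uniform(-1, 1, m)

A = legvander2d(y1, y2, [deg, deg])            # tensor-product Legendre design matrix
coef, *_ = np.linalg.lstsq(A, f(y1, y2), rcond=None)   # least-squares polynomial coefficients

# Measure the error on an independent random test set.
t1, t2 = rng.uniform(-1, 1, 2000), rng.uniform(-1, 1, 2000)
err = np.max(np.abs(legvander2d(t1, t2, [deg, deg]) @ coef - f(t1, t2)))
print("max test error:", err)

The book discusses when and why such sample-based approximations work in genuinely high dimensions, how to choose the polynomial subspace, and how compressed sensing techniques can exploit sparsity when samples are scarce.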
Want to learn more about the book or preorder a copy? Visit www.sparse-hd-book.ca!
Back to the classroom!
September 7, 2021
Today is a historic day. :)
539 days after the closure of Concordia's campuses on March 17, 2020, we're back to in-person teaching. For me, this transition is marked by teaching Numerical Analysis. I could not have asked for a better course for my return to the classroom! I taught my first lecture to a hybrid audience: some students were in the classroom, while others watched the live-streamed lecture online.
1W-MINDS Seminar: The curse of dimensionality and the blessings of sparsity and Monte Carlo sampling
May 14, 2021
Approximating multivariate functions from pointwise observations is a ubiquitous task in data science and scientific computing. This task is made intrinsically difficult by the presence of four simultaneous challenges: (i) the target function is usually defined over a high- or infinite-dimensional domain; (ii) generating samples can be very expensive; (iii) samples are corrupted by unknown sources of error; (iv) the target function might take values in a function space.
On May 13 I gave a 1W-MINDS Seminar on how to overcome these four challenges thanks to the "blessings" of sparsity and Monte Carlo sampling, entitled
The curse of dimensionality and the blessings of sparsity and Monte Carlo sampling: From polynomial to deep neural network approximation in high dimensions
Did you miss it? No worries, you can find a recording of my talk on YouTube!
I would like to thank again the 1W-MINDS organizers and, in particular, Mark Iwen for the kind invitation.
ICERM Hot Topics Workshop: Safety and Security of Deep Learning
April 14, 2021
The ICERM Hot Topics Workshop "Safety and Security of Deep Learning", which I co-organized with Ben Adcock (SFU), Anders Hansen (Cambridge), and Clayton Webster (U Texas), took place a few days ago (April 10-11) and was a great success!
We listened to outstanding and thought-provoking talks by eight high-caliber speakers: Gitta Kutyniok (LMU Munich), Aleksander Mądry (MIT), Ivan Tyukin (U Leicester), Ronald DeVore (Texas A&M), Cynthia Rudin (Duke), Rachel Cummings (Columbia), Genevera Allen (Rice), and Emmanuel Candès (Stanford). The talks spanned a wide range of topics related to deep learning, including robustness to adversarial attacks, interpretability, expressibility and approximation theory, differential privacy, reliability and uncertainty quantification of predictions.
The workshop had more than 100 registered virtual attendees from all over the world. The talks generated many interesting discussions that will potentially open new research avenues towards making deep learning a safer and more secure technology.
Slides and videos of the event will be made available on the ICERM workshop webpage.
MASSA Podcast: Mathematics of Data Science and Deep Learning
February 7, 2021
I had the pleasure of being a guest on the MASSA Podcast. This podcast is an excellent initiative of the Mathematics, Actuarial, and Statistics Student Association (aka MASSA) of Concordia University. I have been a regular listener of the podcast since its launch in Fall 2020. It's been a great way to stay connected with students and colleagues from the Department during this period of social distancing. I'm quite proud to be part of this series!
In the episode, we talk about the Mathematics of Data Science, Compressed Sensing, and Deep Learning. Check it out here!
Invariance, encodings, and generalization: learning identity effects with neural networks
January 21, 2021
Matthew Liu (Concordia), Paul Tupper (SFU), and I have just submitted a new paper!
We provide a theoretical framework in which we rigorously prove that learning algorithms satisfying simple criteria cannot generalize so-called identity effects outside the training set. Identity effects determine whether two components of an object are identical or not, and they arise frequently in language and cognitive science. We show that a broad class of learning algorithms, including deep feedforward neural networks trained via gradient-based methods (such as SGD or Adam), satisfies our criteria, depending on the encoding of the inputs. In some broader circumstances, we also provide adversarial examples able to "fool" the learning algorithm. Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs. Our experiments can be reproduced using this code.
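For a quick feel of the phenomenon, here is a toy sketch (not the experimental setup of the paper; see the code linked above for that). A small network is trained to decide whether two one-hot-encoded symbols are identical, using only the first 24 symbols of a 26-symbol alphabet, and is then tested on pairs built from the two held-out symbols. The alphabet size, architecture, and library choice (scikit-learn) are illustrative.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n_symbols = 26
eye = np.eye(n_symbols)                  # one-hot encoding of each symbol

def encode(i, j):
    # A pair of symbols is encoded by concatenating their one-hot vectors.
    return np.concatenate([eye[i], eye[j]])

# Training set: identical and non-identical pairs built only from the first 24 symbols.
seen = range(24)
X_train, y_train = [], []
for i in seen:
    X_train.append(encode(i, i)); y_train.append(1)          # identical pair
    j = rng.choice([k for k in seen if k != i])
    X_train.append(encode(i, j)); y_train.append(0)          # non-identical pair

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=5000, random_state=0)
clf.fit(np.array(X_train), np.array(y_train))

# Test set: pairs involving the two novel symbols (indices 24 and 25).
X_test = np.array([encode(24, 24), encode(25, 25), encode(24, 25), encode(25, 24)])
print(clf.predict(X_test))               # the first two pairs are identical, but with one-hot
                                         # inputs the network typically fails to label them as such

With one-hot encodings, the network receives no training signal about the coordinates of the held-out symbols, so it generally cannot recognize the novel identical pairs, in line with the impossibility results of the paper.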