Blog


Upcoming book! Sparse Polynomial Approximation of High-Dimensional Functions

December 16, 2021

I am incredibly excited to announce that the book Sparse Polynomial Approximation of High-Dimensional Functions, co-authored by Ben Adcock (Simon Fraser University), Clayton Webster (University of Texas), and me, is very close to being published in the SIAM Computational Science and Engineering book series!

Our book provides an in-depth treatment of sparse polynomial approximation methods, which have emerged as useful tools for high-dimensional approximation tasks arising in a wide range of applications in computational science and engineering. It begins with a comprehensive overview of best s-term polynomial approximation theory for holomorphic, high-dimensional functions, together with a detailed survey of applications to parametric differential equations. It then describes methods for computing sparse polynomial approximations, focusing on least-squares and compressed sensing techniques.
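If you are curious about what these methods look like in practice, here is a minimal sketch (not taken from the book) of one of the two computational approaches it covers: a least-squares polynomial approximation computed from Monte Carlo samples. The target function, dimension, and total-degree index set below are illustrative assumptions, not examples from the book.

```python
# Minimal sketch: least-squares approximation of a high-dimensional function
# in a tensor-product Legendre basis from Monte Carlo samples.
# The target function f, the dimension d, and the index set are illustrative choices.

import itertools
import numpy as np
from numpy.polynomial.legendre import legval

d = 4        # dimension of the domain [-1, 1]^d
n_max = 3    # maximum total degree of the polynomial space
m = 400      # number of Monte Carlo sample points

def f(y):
    # Hypothetical smooth target function
    return np.exp(-np.sum(y**2, axis=-1) / d)

# Total-degree multi-index set {nu : |nu|_1 <= n_max}
indices = [nu for nu in itertools.product(range(n_max + 1), repeat=d)
           if sum(nu) <= n_max]

def legendre_1d(x, k):
    # Evaluate the degree-k Legendre polynomial P_k at the points x
    c = np.zeros(k + 1)
    c[k] = 1.0
    return legval(x, c)

def design_matrix(Y):
    # Rows: sample points; columns: tensor-product Legendre basis functions
    A = np.ones((Y.shape[0], len(indices)))
    for j, nu in enumerate(indices):
        for i in range(d):
            A[:, j] *= legendre_1d(Y[:, i], nu[i])
    return A

rng = np.random.default_rng(0)
Y = rng.uniform(-1.0, 1.0, size=(m, d))          # uniform Monte Carlo samples
coeffs, *_ = np.linalg.lstsq(design_matrix(Y), f(Y), rcond=None)

# Estimate the error on fresh test points
Y_test = rng.uniform(-1.0, 1.0, size=(1000, d))
err = np.max(np.abs(design_matrix(Y_test) @ coeffs - f(Y_test)))
print(f"{len(indices)} basis functions, max test error ~ {err:.2e}")
```

Broadly speaking, the compressed sensing techniques described in the book replace the least-squares solve above with a sparsity-promoting recovery step, which allows larger index sets to be handled with relatively few samples.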

Want to learn more about or preorder our book? Visit www.sparse-hd-book.ca!

The cover of our upcoming book!

Back to the classroom!

September 7, 2021

Today is a historic day. :)

539 days after the closure of Concordia's campuses on March 17, 2020, we're back to in-person teaching. For me, this transition is marked by teaching Numerical Analysis. I could not have asked for a better course for my return to the classroom! I taught my first lecture to a hybrid audience: some students were in the classroom, others were watching the live-streamed lecture online.

A screenshot of my first hybrid lecture. 

1W-MINDS Seminar: The curse of dimensionality and the blessings of sparsity and Monte Carlo sampling 

May 14, 2021

Approximating multivariate functions from pointwise observations is a ubiquitous task in data science and scientific computing. This task is made intrinsically difficult by the presence of four concurrent challenges: (i) the target function is usually defined over a high- or infinite-dimensional domain; (ii) generating samples can be very expensive; (iii) samples are corrupted by unknown sources of error; (iv) the target function might take values in a function space.

On May 13, I gave a 1W-MINDS Seminar on how to overcome these four challenges thanks to the "blessings" of sparsity and Monte Carlo sampling, entitled

The curse of dimensionality and the blessings of sparsity and Monte Carlo sampling: From polynomial to deep neural network approximation in high dimensions

Did you miss it? No worries: you can find a recording of my talk on YouTube.

I would like to thank again the 1W-MINDS organizers and, in particular, Mark Iwen for the kind invitation.

ICERM Hot Topics Workshop: Safety and Security of Deep Learning 

April 14, 2021

The ICERM Hot Topics Workshop "Safety and Security of Deep Learning", which I co-organized with Ben Adcock (SFU), Anders Hansen (Cambridge), and Clayton Webster (U Texas), took place a few days ago (April 10–11) and was a great success!

We listened to outstanding and thought-provoking talks by eight high-caliber speakers: Gitta Kutyniok (LMU Munich), Aleksander Mądry (MIT), Ivan Tyukin (U Leicester), Ronald DeVore (Texas A&M), Cynthia Rudin (Duke), Rachel Cummings (Columbia), Genevera Allen (Rice), and Emmanuel Candès (Stanford). The talks spanned a wide range of topics related to deep learning, including robustness to adversarial attacks, interpretability, expressibility and approximation theory, differential privacy, and the reliability and uncertainty quantification of predictions.

The workshop had more than 100 registered virtual attendees from all over the world. The talks generated many interesting discussions that will potentially open new research avenues towards making deep learning a safer and more secure technology. 

Slides and videos of the event will be made available at

https://icerm.brown.edu/events/htw-21-ssdl/ 

MASSA Podcast: Mathematics of Data Science and Deep Learning 

February 7, 2021

I had the pleasure of being a guest on the MASSA Podcast. This podcast is an excellent initiative of the Mathematics, Actuarial, and Statistics Student Association (aka MASSA) of Concordia University. I have been a regular listener since its launch in Fall 2020. It's been a great way to stay connected with students and colleagues from the Department during this period of social distancing. I'm quite proud to be part of this series!

In the episode, we talk about the Mathematics of Data Science, Compressed Sensing, and Deep Learning. Check it out here!

Invariance, encodings, and generalization: learning identity effects with neural networks

January 21, 2021

Matthew Liu (Concordia), Paul Tupper (SFU), and I have just submitted a new paper!

We provide a theoretical framework in which we can rigorously prove that learning algorithms satisfying simple criteria are unable to generalize so-called identity effects outside the training set. Identity effects arise when one must determine whether two components of an object are identical or not; they occur frequently in language and in cognitive science. We show that a broad class of learning algorithms, including deep feedforward neural networks trained via gradient-based methods (such as SGD or Adam), satisfies our criteria, depending on the encoding of the inputs. In some broader circumstances, we also provide adversarial examples able to "fool" the learning algorithm. Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs. Our experiments can be reproduced using this code.

Learning model used to identify pairs of identical digits. The model is obtained by concatenating a computer vision (CV) model and an identity effect (IE) model. From right to left: The model takes a pair of images as input (right); then, the CV model classifies them or, equivalently, encodes them as two 10-dimensional probability vectors (center); finally, the IE model assigns a rating from 0 to 1 to the pair of probability (or encoding) vectors to identify whether the images represent identical digits or not (left). In our paper, we show that the ability of the IE model to recognize identical pairs improves when the CV model is undertrained and decreases when the CV model is optimally trained. This is in accordance with our theory.
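For illustration, here is a minimal sketch of a two-stage model of the kind shown in the figure: a CV classifier that encodes each image as a 10-dimensional probability vector, followed by an IE model that rates whether the pair of encodings corresponds to identical digits. This is not the paper's actual code; the layer sizes and the feedforward structure of the two components are assumptions made for this sketch.

```python
# Minimal sketch (illustrative, not the paper's code) of a CV + IE model:
# the CV model encodes each image as a 10-dimensional probability vector,
# and the IE model rates whether the two encodings represent the same digit.

import torch
import torch.nn as nn

class IdentityEffectModel(nn.Module):
    def __init__(self):
        super().__init__()
        # CV model: maps a 28x28 image to a 10-dimensional probability vector
        self.cv = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, 10),
            nn.Softmax(dim=-1),
        )
        # IE model: maps the concatenated pair of encodings to a rating in [0, 1]
        self.ie = nn.Sequential(
            nn.Linear(20, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
            nn.Sigmoid(),
        )

    def forward(self, img_a, img_b):
        enc_a = self.cv(img_a)                 # encoding of the first image
        enc_b = self.cv(img_b)                 # encoding of the second image
        pair = torch.cat([enc_a, enc_b], dim=-1)
        return self.ie(pair)                   # 1 = identical digits, 0 = different

# Example: rate a batch of two random image pairs
model = IdentityEffectModel()
a, b = torch.rand(2, 1, 28, 28), torch.rand(2, 1, 28, 28)
print(model(a, b).shape)  # torch.Size([2, 1])
```

Training the CV and IE components separately, and varying how well the CV model is trained, is what allows one to probe the effect of the encoding on generalization, as described in the caption above.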
