The 1W-MINDS Seminar was founded in the early days of the COVID-19 pandemic, when travel was impossible. We have continued the seminar since then to help build an inclusive community interested in mathematical data science, computational harmonic analysis, and related applications by providing free access to high-quality talks without the need to travel. In the spirit of environmental and social sustainability, we welcome you to participate in both the seminar and our slack channel community! Zoom talks are held on Thursdays at 2:30 pm New York time. To find and join the 1W-MINDS slack channel, please click here.
Current Organizers (September 2025 - May 2026): Ben Adcock (Simon Fraser University), March Boedihardjo (Michigan State University), Hung-Hsu Chou (University of Pittsburgh), Diane Guignard (University of Ottawa), Longxiu Huang (Michigan State University), Mark Iwen (Principal Organizer, Michigan State University), Siting Liu (UC Riverside), Kevin Miller (Brigham Young University), and Christian Parkinson (Michigan State University).
Most previous talks are on the seminar YouTube channel. You can catch up there, or even subscribe if you like.
To sign up to receive email announcements about upcoming talks, click here.
To join the 1W-MINDS slack channel, click here.
Passcode: the smallest prime > 100
Why does gradient descent, when run on highly over-parameterized models, prefer simple solutions? For matrices, this implicit bias toward low-rank structure is well established, but extending such results to tensors is much harder. In this talk, I will present our recent work that establishes implicit regularization in tensor factorizations under gradient descent. We focus on the tubal tensor product and the associated notion of tubal rank, motivated by applications to image data. Our results show that, in over-parameterized settings, small random initialization plays a key role: it steers gradient descent toward solutions of low tubal rank. Alongside the theory, I will present simulations that illustrate how these dynamics shape the optimization trajectory. This work bridges a gap between the matrix and tensor cases and connects implicit regularization to a broader class of learning problems.
It is joint work with Santhosh Karnik, Mark Iwen, and Felix Krahmer.
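As a purely illustrative companion to the abstract, here is a minimal Python/NumPy sketch of the setting it describes, assuming the standard FFT-based definition of the tubal (t-)product. It is not the speaker's code; the tensor sizes, step size, number of iterations, and function names (t_product, t_transpose) are our own illustrative choices.

import numpy as np

def t_product(A, B):
    # Tubal (t-)product of third-order tensors:
    # A is (n1, n2, n3), B is (n2, n4, n3), result is (n1, n4, n3).
    # Take the DFT along the third (tubal) mode, multiply matching
    # frontal slices, then transform back.
    A_hat = np.fft.fft(A, axis=2)
    B_hat = np.fft.fft(B, axis=2)
    C_hat = np.stack([A_hat[:, :, k] @ B_hat[:, :, k]
                      for k in range(A.shape[2])], axis=2)
    return np.real(np.fft.ifft(C_hat, axis=2))

def t_transpose(A):
    # Tubal transpose: transpose each frontal slice and reverse
    # the order of slices 2 through n3.
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, 1:][:, :, ::-1]], axis=2)

rng = np.random.default_rng(0)
n, r_true, r_over, n3 = 10, 2, 10, 4        # illustrative sizes only

# Ground-truth tensor of low tubal rank r_true.
X = t_product(rng.standard_normal((n, r_true, n3)),
              rng.standard_normal((r_true, n, n3)))

# Over-parameterized factorization X ~ U * V, started from a SMALL
# random initialization and fit by plain gradient descent.
scale, lr = 1e-3, 1e-3
U = scale * rng.standard_normal((n, r_over, n3))
V = scale * rng.standard_normal((r_over, n, n3))

for _ in range(3000):
    R = t_product(U, V) - X                 # residual of the current fit
    U, V = (U - lr * t_product(R, t_transpose(V)),
            V - lr * t_product(t_transpose(U), R))

Tracking how U * V evolves along such a trajectory (for instance, via the singular values of its Fourier-domain frontal slices, which determine the tubal rank) is one way to visualize the low-tubal-rank bias described in the abstract.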
High-dimensional datasets often reside on a low-dimensional geometric manifold. Manifold learning algorithms aim to recover this underlying structure by mapping the data into lower dimensions while minimizing some measure of local (and possibly global) distortion incurred by the map. However, with most manifold learning schemes, small (or even finite) distortion is not guaranteed, and may not even be achievable. Bottom-up approaches address this problem by first constructing low-distortion, low-dimensional local views of the data that are provably reliable, and then integrating them into a global embedding without inducing too much additional distortion.
In our work, we investigate the following questions:
1. How to obtain low-distortion low-dimensional local views of high-dimensional data that are robust to noise?
2. How to integrate these local views in an efficient manner to produce a low-dimensional global embedding with distortion guarantees?
3. How does the distortion incurred in the low-dimensional embedding impact the performance of downstream tasks?
This talk covers joint work with Dhruv Kohli, Gal Mishne, Sawyer Robertson, Johannes Nieuwenhuis, and Devika Narain.
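The speakers' specific constructions are not reproduced here, but the generic bottom-up pipeline described in the abstract (local views followed by global integration) can be sketched in a few lines of Python in the style of local tangent space alignment (LTSA). All function names, the neighborhood size k, and the example data below are illustrative assumptions, not the authors' method.

import numpy as np
from scipy.spatial import cKDTree
from scipy.linalg import eigh

def bottom_up_embedding(X, d, k=12):
    # 1) Local views: project each k-nearest-neighbor patch onto its top-d
    #    principal directions (an estimate of the local tangent space).
    # 2) Global integration: find n x d coordinates whose patches agree,
    #    up to affine maps, with the local views, via one eigenproblem.
    n = X.shape[0]
    _, nbrs = cKDTree(X).query(X, k=k)              # k nearest neighbors (incl. self)
    Phi = np.zeros((n, n))                          # alignment matrix
    for i in range(n):
        idx = nbrs[i]
        Xi = X[idx] - X[idx].mean(axis=0)           # centered neighborhood
        _, _, Vt = np.linalg.svd(Xi, full_matrices=False)
        theta = Xi @ Vt[:d].T                       # low-dimensional local view
        G = np.hstack([np.ones((k, 1)) / np.sqrt(k), theta])
        W = np.eye(k) - G @ np.linalg.pinv(G)       # misfit not explained by the view
        Phi[np.ix_(idx, idx)] += W.T @ W
    vals, vecs = eigh(Phi)                          # bottom nontrivial eigenvectors
    return vecs[:, 1:d + 1]

# Usage: a noisy "Swiss roll" in R^3 mapped to 2 dimensions.
rng = np.random.default_rng(0)
t = 1.5 * np.pi * (1 + 2 * rng.random(800))
h = 20 * rng.random(800)
X = np.column_stack([t * np.cos(t), h, t * np.sin(t)])
X += 0.05 * rng.standard_normal(X.shape)
Y = bottom_up_embedding(X, d=2, k=12)

In this schematic, the local step estimates a tangent-space view of each neighborhood and the global step integrates all views through a single eigenproblem; the work discussed in the talk replaces both steps with constructions that, per the abstract, come with robustness to noise and distortion guarantees.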