The 1W-MINDS Seminar was founded in the early days of the COVID-19 pandemic to mitigate the impossibility of travel. We have chosen to continue the seminar since then to help form the basis of an inclusive community interested in mathematical data science, computational harmonic analysis, and related applications by providing free access to high-quality talks without the need to travel. In the spirit of environmental and social sustainability, we welcome you to participate in both the seminar and our Slack channel community! Zoom talks are held on Thursdays at 2:30 pm New York time. To find and join the 1W-MINDS Slack channel, please click here.
Current Organizers (September 2025 - May 2026): Ben Adcock (Simon Fraser University), March Boedihardjo (Michigan State University), Hung-Hsu Chou (University of Pittsburgh), Diane Guignard (University of Ottawa), Longxiu Huang (Michigan State University), Mark Iwen (Principal Organizer, Michigan State University), Siting Liu (UC Riverside), Kevin Miller (Brigham Young University), and Christian Parkinson (Michigan State University).
Most previous talks are available on the seminar's YouTube channel. You can catch up there, or even subscribe if you like.
To sign up to receive email announcements about upcoming talks, click here.
To join the 1W-MINDS Slack channel, click here.
Passcode: the smallest prime > 100
In this era of big data, we must adapt our algorithms to handle large datasets. One obvious issue is that the number of floating-point operations (flops) increases with the input size, but there are many less obvious issues as well, such as the increased communication cost of moving data between different levels of computer memory. Randomization is increasingly being used to alleviate some of these issues, as anyone familiar with random mini-batch sampling in machine learning is well aware. This talk goes into some specific examples of using randomization to improve algorithms. We focus on special classes of structured random dimensionality reduction, including the count sketch, TensorSketch, the Kronecker fast Johnson-Lindenstrauss sketch, and preconditioned sampling. These randomized techniques can then be applied, for example, to speed up the classical Lloyd's algorithm for K-means and to compute tensor decompositions. If time permits, we will also show extensions to optimization, including a gradient-free method that uses random finite differences.
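For readers unfamiliar with the count sketch mentioned above, the following minimal numpy sketch (our own illustration, not material from the talk) shows the core idea: each row of a tall matrix is hashed to one of a small number of buckets and accumulated with a random sign, so the sketched matrix can be formed in a single pass without ever building the sketching matrix. The least-squares use at the end is a hypothetical example of how such a sketch might be applied; sizes and variable names are placeholders.

```python
import numpy as np

def count_sketch(A, m, seed=None):
    """Apply a CountSketch map S (m x n) to the rows of A (n x d), returning S @ A.

    Each row of A is hashed to one of m buckets and multiplied by an
    independent random sign, so S @ A is formed in one pass over A.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    buckets = rng.integers(0, m, size=n)        # hash each row to a bucket
    signs = rng.choice([-1.0, 1.0], size=n)     # independent random signs
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, buckets, signs[:, None] * A)  # accumulate signed rows per bucket
    return SA

# Hypothetical use: sketch an overdetermined least-squares problem min_x ||Ax - b||.
rng = np.random.default_rng(0)
A = rng.standard_normal((100_000, 50))
b = rng.standard_normal(100_000)
SAb = count_sketch(np.column_stack([A, b]), m=2_000, seed=1)  # sketch [A | b] together
x_sketch, *_ = np.linalg.lstsq(SAb[:, :-1], SAb[:, -1], rcond=None)
```

With the sketch size m much smaller than the number of rows n (but larger than the number of columns), the downstream solve operates on a far smaller matrix while approximately preserving the least-squares geometry; this is the general pattern behind the speed-ups discussed in the talk.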