The 1W-MINDS Seminar was founded in the early days of the COVID-19 pandemic, when travel was impossible. We have continued it since then to help build an inclusive community interested in mathematical data science, computational harmonic analysis, and related applications by providing free access to high-quality talks without the need to travel. In the spirit of environmental and social sustainability, we welcome you to participate in both the seminar and our Slack channel community! Zoom talks are held on Thursdays at 2:30 pm New York time. To find and join the 1W-MINDS Slack channel, please click here.
Current Organizers (September 2025 - May 2026): Ben Adcock (Simon Fraser University), March Boedihardjo (Michigan State University), Hung-Hsu Chou (University of Pittsburgh), Diane Guignard (University of Ottawa), Longxiu Huang (Michigan State University), Mark Iwen (Principal Organizer, Michigan State University), Siting Liu (UC Riverside), Kevin Miller (Brigham Young University), and Christian Parkinson (Michigan State University).
Most previous talks are on the seminar YouTube channel. You can catch up there, or even subscribe if you like.
To sign up to receive email announcements about upcoming talks, click here.
To join the 1W-MINDS Slack channel, click here.
Passcode: the smallest prime > 100
Sparse approximation has proven effective for approximating quantities of interest and parameter-to-solution maps in parametric partial differential equation (PDE) models arising in applications such as uncertainty quantification (UQ). Our previous work demonstrates the existence of efficient algorithms for approximating solutions to parametrized elliptic PDEs with smooth parametric dependence. In particular, a method based on l1-minimization was shown to achieve exponential and algebraic rates of convergence in the number of samples m in the finite- and infinite-dimensional settings, respectively, at a cost that grows only algebraically in m. In this talk we discuss recent progress in extending these techniques to more challenging modeling scenarios, such as time-dependent problems and nonlinear PDEs, including the Navier-Stokes-Brinkman and Boussinesq equations in mixed variational formulation. We also provide comparisons with modern neural-network-based approximation schemes.
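For readers less familiar with the l1-minimization step, the following is a minimal, self-contained sketch (not the speaker's code) of how sparse coefficients of a function in a polynomial basis can be recovered from few random samples. The Chebyshev dictionary, sparsity level, and ISTA solver below are all illustrative assumptions.

```python
# Minimal sketch: recovering a sparse polynomial coefficient vector from
# random samples via l1-minimization (LASSO, solved with ISTA). All problem
# sizes and parameters below are illustrative, not from the talk.
import numpy as np

rng = np.random.default_rng(0)

N = 200          # size of the Chebyshev polynomial dictionary
m = 60           # number of random samples, m << N
s = 5            # sparsity of the "true" coefficient vector

# Ground-truth sparse coefficients of u(y) = sum_k c_k T_k(y)
c_true = np.zeros(N)
support = rng.choice(N, s, replace=False)
c_true[support] = rng.normal(size=s)

# Sampling matrix: Chebyshev polynomials T_k evaluated at m random points,
# using T_k(y) = cos(k * arccos(y)) for y in [-1, 1]
y = rng.uniform(-1.0, 1.0, m)
A = np.cos(np.outer(np.arccos(y), np.arange(N))) / np.sqrt(m)
b = A @ c_true

# ISTA for min_c 0.5*||A c - b||_2^2 + lam*||c||_1
lam = 1e-4
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
c = np.zeros(N)
for _ in range(5000):
    grad = A.T @ (A @ c - b)
    z = c - grad / L
    c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(c - c_true) / np.linalg.norm(c_true))
```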
Momentum-based methods are highly successful empirically in machine learning and theoretically well understood in convex optimization and some similar settings. We illustrate recent progress in understanding momentum methods beyond the convex setting: sufficient conditions for their success, and applications where they may fail to achieve superior performance. This is joint work with Kanan Gupta, Jonathan Siegel, Patrick Dondl, and Akwum Onwunta.
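As a concrete illustration of why momentum helps in the well-understood convex setting the abstract mentions, here is a minimal sketch comparing plain gradient descent with heavy-ball momentum on a poorly conditioned quadratic; the parameter choices are the classical tuned values for this setting, not anything specific to the speaker's results.

```python
# Minimal sketch: heavy-ball momentum versus plain gradient descent on a
# poorly conditioned quadratic f(x) = 0.5 * x^T H x. Parameters are the
# classical tuned choices for a mu-strongly-convex, L-smooth quadratic.
import numpy as np

mu, L = 1.0, 100.0          # condition number kappa = L / mu = 100
H = np.diag([mu, L])

def gd(x, lr, steps):
    for _ in range(steps):
        x = x - lr * (H @ x)
    return x

def heavy_ball(x, lr, beta, steps):
    v = np.zeros_like(x)
    for _ in range(steps):
        v = beta * v - lr * (H @ x)   # momentum accumulates past gradients
        x = x + v
    return x

x0 = np.array([1.0, 1.0])
steps = 100
lr_hb = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

print("gradient descent:", np.linalg.norm(gd(x0, 1.0 / L, steps)))
print("heavy ball:      ", np.linalg.norm(heavy_ball(x0, lr_hb, beta, steps)))
```

With these tuned parameters, heavy ball contracts at roughly the accelerated rate (sqrt(kappa) - 1)/(sqrt(kappa) + 1) per step, versus 1 - 1/kappa for plain gradient descent, which is exactly the gap the convex theory predicts.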
In this talk, I will address the problem of recovering periodic source terms in a discrete dynamical system of the form x_{n+1} = A x_n + w(n), where x_n is the n-th state in a Hilbert space H, A ∈ B(H) is a bounded linear operator, and w(n) is a source term lying in a closed subspace W of H. The focus is on the stable recovery of w from time-space sample measurements formed by inner products with vectors from a Bessel system G ⊂ H. Such results may be relevant to applications like environmental monitoring, where precise source identification is critical.
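To make the setup concrete, below is a minimal finite-dimensional sketch (an assumption-laden toy, not the speaker's method): the Hilbert space is replaced by R^d, the Bessel system by a few random probe vectors, and the periodic source is recovered by plain linear least squares from the time-space samples.

```python
# Minimal finite-dimensional sketch: recover a T-periodic source
# w(n) = w(n mod T) in x_{n+1} = A x_n + w(n) by least squares from
# time-space samples y_{n,j} = <x_n, g_j>. In the talk the state space is an
# infinite-dimensional Hilbert space and {g_j} a Bessel system; here H = R^d
# and the g_j are random vectors, with the initial state assumed known.
import numpy as np

rng = np.random.default_rng(1)
d, T = 6, 3                                      # state dimension, period
A = 0.9 * rng.normal(size=(d, d)) / np.sqrt(d)   # a (likely) stable operator
W_true = rng.normal(size=(T, d))                 # the periodic source terms
x0 = rng.normal(size=d)                          # known initial state

n_steps, n_probes = 20, 2
G = rng.normal(size=(n_probes, d))               # time-space sampling vectors

# Simulate the dynamics and record samples y_{n,j} = <x_n, g_j>
xs = [x0]
for n in range(n_steps):
    xs.append(A @ xs[-1] + W_true[n % T])
Y = np.array([G @ x for x in xs[1:]])            # shape (n_steps, n_probes)

# Since x_n = A^n x0 + sum_{k<n} A^{n-1-k} w(k mod T) is linear in the
# unknown w's, stack all measurements into one system M vec(W) = y.
Apow = [np.eye(d)]
for _ in range(n_steps):
    Apow.append(A @ Apow[-1])

M = np.zeros((n_steps * n_probes, T * d))
y = np.zeros(n_steps * n_probes)
row = 0
for n in range(1, n_steps + 1):
    known = Apow[n] @ x0                         # contribution of x0
    for j in range(n_probes):
        y[row] = Y[n - 1, j] - G[j] @ known
        for k in range(n):
            p = k % T                            # which periodic source term
            M[row, p * d:(p + 1) * d] += G[j] @ Apow[n - 1 - k]
        row += 1

W_rec = np.linalg.lstsq(M, y, rcond=None)[0].reshape(T, d)
print("relative error:",
      np.linalg.norm(W_rec - W_true) / np.linalg.norm(W_true))
```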