One World Mathematics of Information, Data, and Signals (1W-MINDS) Seminar

The 1W-MINDS Seminar was founded in the early days of the COVID-19 pandemic, when travel was impossible.  We have continued the seminar since then to help form the basis of an inclusive community interested in mathematical data science, computational harmonic analysis, and related applications by providing free access to high-quality talks without the need to travel.  In the spirit of environmental and social sustainability, we welcome you to participate in both the seminar and our Slack channel community!  Zoom talks are held on Thursdays either at 2:30 pm New York time or at 10:00 am Paris time (4:00 pm summer / 5:00 pm winter Shanghai time).  To find and join the 1W-MINDS Slack channel, please click here.

Current Organizers (September 2024 - May 2025):  Axel Flinth (Principal Organizer for Europe/Asia, Umeå University), Christian Parkinson (Principal Organizer for The Americas, Michigan State University), Rima Alaifari (ETH Zürich), Alex Cloninger (UC San Diego), Longxiu Huang (Michigan State University), Mark Iwen (Michigan State University), Weilin Li (City College of New York), Siting Liu (UC Riverside), Kevin Miller (Brigham Young University), and Yong Sheng Soh (National University of Singapore).

Most previous talks are on the seminar YouTube channel.  You can catch up there, or even subscribe if you like.

To sign up to receive email announcements about upcoming talks, click here.
To join the 1W-MINDS Slack channel, click here.


The organizers would like to acknowledge support from the Michigan State University Department of Mathematics.  Thank you.

Zoom Link for all 2:30 pm New York time Talks: New York link 

Passcode: the smallest prime > 100 

Zoom Link for all 10:00 am Paris/4:00 pm Summer Shanghai/5:00 pm Winter Shanghai time Talks: Paris/Shanghai link

Passcode: The integer part and first five decimals of e (Euler's number)

FUTURE TALKS

February 27: Lenaïc Chizat (École Polytechnique Fédérale de Lausanne)

TBA

March 6: Chad Topaz (Williams College)

TBA

March 13: Yao Xie (Georgia Institute of Technology)

TBA

March 20: Christoph Hertrich (Université libre de Bruxelles / University of Technology Nuremberg)

TBA

March 27: Yuesheng Xu (Old Dominion University)

Title: Multi-Grade Deep Learning

The remarkable success of deep learning is widely recognized, yet its training process remains a black box. Standard deep learning relies on a single-grade approach, where a deep neural network (DNN) is trained end-to-end by solving a large, nonconvex optimization problem. As the depth of the network increases, this approach becomes computationally challenging due to the complexity of learning all weight matrices and bias vectors simultaneously. Inspired by the human education system, we propose a multi-grade deep learning (MGDL) model that structures learning into successive grades. Instead of solving a single large-scale optimization problem, MGDL decomposes the learning process into a sequence of smaller optimization problems, each corresponding to a grade. At each grade, a shallow neural network is learned to approximate the residual left from previous grades, and its parameters remain fixed in subsequent training. This hierarchical learning strategy mitigates the severity of nonconvexity in the original optimization problem, making training more efficient and stable. The final learned model takes a stair-shaped architecture, formed by the superposition of networks learned across all grades. MGDL naturally enables adaptive learning, allowing for the addition of new grades if the approximation error remains above a given tolerance. We establish theoretical guarantees in the context of function approximation, proving that if the newly learned network at a given grade is nontrivial, the optimal error is strictly reduced from the previous grade. Furthermore, we present numerical experiments demonstrating that MGDL significantly outperforms the conventional single-grade model in both efficiency and robustness to noise.
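The abstract above describes MGDL as learning a shallow network at each grade to fit the residual left by previous grades, whose parameters are then frozen. The sketch below is only an illustration of that residual-fitting idea on a toy 1-D regression problem, not the speaker's implementation: the full MGDL model has a stair-shaped architecture in which each grade builds on the previous grades' features, whereas this simplified sketch just adds the grade outputs together. All network sizes, hyperparameters, and the target function are illustrative assumptions.

```python
# Hedged sketch of the multi-grade residual-fitting idea from the abstract.
# Each "grade" trains a small one-hidden-layer tanh network on the current
# residual by full-batch gradient descent, then freezes it.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(np.pi * x)                      # illustrative target function

def train_shallow(x, r, hidden=20, steps=2000, lr=0.05):
    """Fit a one-hidden-layer tanh network to the residual r."""
    W1 = rng.normal(scale=1.0, size=(1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, 1))
    b2 = np.zeros(1)
    n = len(x)
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)           # forward pass
        err = (h @ W2 + b2) - r            # gradient of 0.5 * mean squared error
        gW2 = h.T @ err / n
        gb2 = err.mean(axis=0)
        gh = (err @ W2.T) * (1.0 - h**2)   # backprop through tanh
        gW1 = x.T @ gh / n
        gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    # Parameters are captured here and never updated again: the grade is frozen.
    return lambda z: np.tanh(z @ W1 + b1) @ W2 + b2

residual = y.copy()
grades = []
for g in range(3):                         # three grades, for illustration
    net = train_shallow(x, residual)       # small optimization problem per grade
    grades.append(net)
    residual = residual - net(x)           # next grade sees what is left over
    print(f"grade {g + 1}: residual RMSE = {np.sqrt((residual**2).mean()):.4f}")

# Final model: superposition of the networks learned across all grades.
mgdl = lambda z: sum(net(z) for net in grades)
```

Because each grade only solves a small, shallow-network problem on the current residual, the per-grade error can only shrink, and new grades can be appended adaptively until the residual falls below a tolerance, matching the adaptive-learning point in the abstract.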

April 17: Robert Webber (University of California, San Diego)

TBA

April 24: Donsub Rim (Washington University in St. Louis)

TBA

May 1: Yat Tin Chow (University of California, Riverside)

TBA

May 15: Wei Zhu (Georgia Institute of Technology)

TBA