One World Mathematics of INformation, Data, and Signals (1W-MINDS) Seminar

Given the impossibility of travel during the COVID-19 crisis, the One World MINDS seminar was founded as an inter-institutional global online seminar aimed at giving researchers interested in mathematical data science, computational harmonic analysis, and related applications access to high-quality talks. Talks are held on Thursdays at 2:30 PM EDT unless otherwise noted below.

Current Organizers (July 2021 - May 2022): Matthew Hirn (Principal Organizer, Michigan State University), Mark Iwen (Michigan State University), Felix Krahmer (Technische Universität München), Shuyang Ling (New York University Shanghai), Rayan Saab (University of California, San Diego), Karin Schnass (University of Innsbruck), and Soledad Villar (Johns Hopkins University)

Founding Organizers (April 2020 - June 2021): Mark Iwen (Principal Organizer, Michigan State University), Bubacarr Bah (African Institute for Mathematical Sciences South Africa), Afonso Bandeira (ETH-Zurich), Matthew Hirn (Michigan State University), Felix Krahmer (Technische Universität München), Shuyang Ling (New York University Shanghai), Ursula Molter (Universidad de Buenos Aires), Deanna Needell (University of California, Los Angeles), Rayan Saab (University of California, San Diego), and Rongrong Wang (Michigan State University)

For information on previous talks, videos, etc., visit our Past Talks page.

To sign up to receive email announcements about upcoming talks, click here.


The organizers would like to acknowledge support from the Michigan State University Department of Mathematics. Thank you.

Use This Zoom Link for all 2:30 pm New York Time Talks Scheduled Below. The passcode is the first prime larger than 100.

Use This Zoom Link for all 4:30 pm Shanghai Time / 10:30 am Paris Time Talks Scheduled Below. The passcode is 2021 followed by the largest prime under 100.

June 17: Wenjing Liao (Georgia Tech)

Regression and doubly robust off-policy learning on low-dimensional manifolds by neural networks

Many data in real-world applications lie in a high-dimensional space but exhibit low-dimensional structures. In mathematics, these data can be modeled as random samples on a low-dimensional manifold. Our goal is to estimate a target function or learn an optimal policy using neural networks. This talk is based on an efficient approximation theory of deep ReLU networks for functions supported on a low-dimensional manifold. We further establish the sample complexity for regression and off-policy learning with finite samples of data. When data are sampled on a low-dimensional manifold, the sample complexity crucially depends on the intrinsic dimension of the manifold instead of the ambient dimension of the data. These results demonstrate that deep neural networks are adaptive to low-dimensional geometric structures of data sets. This is joint work with Minshuo Chen, Haoming Jiang, Hao Liu, and Tuo Zhao at the Georgia Institute of Technology.
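As a rough illustration of the setting (and not of the speaker's method), the following is a minimal sketch of regression with a deep ReLU network on data sampled from a one-dimensional manifold (a circle) embedded in a high-dimensional ambient space. All names, dimensions, and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch only: deep ReLU regression on data whose
# intrinsic dimension (1, a circle) is much smaller than the
# ambient dimension (50).
import torch
import torch.nn as nn

ambient_dim, n_samples = 50, 2000

# Sample points on a circle and embed them in R^50 via a random isometry.
theta = 2 * torch.pi * torch.rand(n_samples)
circle = torch.stack([torch.cos(theta), torch.sin(theta)], dim=1)
embedding, _ = torch.linalg.qr(torch.randn(ambient_dim, 2))   # orthonormal (50, 2)
x = circle @ embedding.T                       # (n_samples, ambient_dim)
y = torch.sin(3 * theta).unsqueeze(1)          # target depends only on the manifold coordinate

# Deep ReLU network; width and depth are arbitrary choices for this sketch.
model = nn.Sequential(
    nn.Linear(ambient_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final training MSE: {loss.item():.4f}")
```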

June 24: Qiang Ye (University of Kentucky)

Batch Normalization and Preconditioning for Neural Network Training

Batch normalization (BN) is a popular and ubiquitous method in deep neural network training that has been shown to decrease training time and improve generalization performance. Despite its success, BN is not theoretically well understood. It is not suitable for use with very small mini-batch sizes or online learning. In this talk, we will review BN and present a preconditioning method called Batch Normalization Preconditioning (BNP) to accelerate neural network training. We will analyze the effects of mini-batch statistics of a hidden variable on the Hessian matrix of a loss function and propose a parameter transformation that is equivalent to normalizing the hidden variables to improve the conditioning of the Hessian. Compared with BN, one benefit of BNP is that it is not constrained by the mini-batch size and works in the online learning setting. We will present several experiments demonstrating the competitiveness of BNP. Furthermore, we will discuss a connection to BN which provides theoretical insight into how BN improves training and how BN is applied to special architectures such as convolutional neural networks.

The talk is based on joint work with Susanna Lange and Kyle Helfrich.
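For readers unfamiliar with the transformation at the heart of BN, here is a minimal sketch of standard mini-batch normalization of a hidden variable; the learnable scale/shift parameters and the details of BNP itself are omitted, and the function and variable names are illustrative assumptions.

```python
# Minimal sketch of standard batch normalization of a hidden variable
# (learnable scale/shift omitted; BNP itself is not reproduced here).
import numpy as np

def batch_normalize(h, eps=1e-5):
    """Normalize each feature of a mini-batch of hidden activations
    to zero mean and unit variance across the batch dimension."""
    mu = h.mean(axis=0)        # per-feature mini-batch mean
    var = h.var(axis=0)        # per-feature mini-batch variance
    return (h - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
hidden = rng.normal(loc=5.0, scale=3.0, size=(32, 10))   # batch of 32, 10 features
normalized = batch_normalize(hidden)
print(normalized.mean(axis=0).round(3))   # approximately 0 for each feature
print(normalized.std(axis=0).round(3))    # approximately 1 for each feature
```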

July 15: Qi (Rose) Yu (University of California, San Diego)

TBA

TBA

July 22: Yaniv Plan (University of British Columbia)

TBA

TBA

July 29: Wotao Yin (UCLA)

TBA

TBA

August 12: Andrea Bertozzi (UCLA)

TBA

TBA

August 26: Deanna Needell (UCLA)

TBA

TBA

September 2: Jonathan Scarlett (National University of Singapore) [Alt. Time - 4:30 pm Shanghai (GMT+8), 10:30 am Paris (CEST)]

TBA

TBA

September 16: Russell Luke (University of Göttingen)

TBA

TBA

September 23: Joel Tropp (California Institute of Technology)

TBA

TBA

September 30: Ursula Molter (Universidad de Buenos Aires)

TBA

TBA

October 7: Afonso Bandeira (ETHZ - Swiss Federal Institute of Technology Zürich) [Alt. Time - 4:30 pm Shanghai (GMT+8), 10:30 am Paris (CEST)]

TBA

TBA


October 21: Bubacarr Bah (African Institute for Mathematical Sciences South Africa) [Alt. Time - 4:30 pm Shanghai (GMT+8), 10:30 am Paris (CEST)]

TBA

TBA

October 28: Rongrong Wang (Michigan State University)

TBA

TBA