One World Mathematics of Information, Data, and Signals (1W-MINDS) Seminar

The 1W-MINDS Seminar was founded in the early days of the COVID-19 pandemic, when travel was impossible.  We have chosen to continue the seminar since then to help form the basis of an inclusive community interested in mathematical data science, computational harmonic analysis, and related applications by providing free access to high-quality talks without the need to travel.  In the spirit of environmental and social sustainability, we welcome you to participate in both the seminar and our Slack community!  Zoom talks are held on Thursdays either at 2:30 pm New York time or at 10:00 am Paris time (4:00 pm Shanghai time in summer, 5:00 pm in winter).  To find and join the 1W-MINDS Slack channel, please click here.

Current Organizers (September 2024 - May 2025):  Axel Flinth (Principal Organizer for Europe/Asia, Umeå University), Christian Parkinson (Principal Organizer for The Americas, Michigan State University), Rima Alaifari (ETH Zürich), Alex Cloninger (UC San Diego), Longxiu Huang (Michigan State University), Mark Iwen (Michigan State University), Weilin Li (City College of New York), Siting Liu (UC Riverside), Kevin Miller (Brigham Young University), and Yong Sheng Soh (National University of Singapore).

Most previous talks are on the seminar YouTube channel.  You can catch up there, or even subscribe if you like.

To sign up to receive email announcements about upcoming talks, click here.
To join the 1W-MINDS Slack channel, click here.


The organizers would like to acknowledge support from the Michigan State University Department of Mathematics.  Thank you.

Zoom Link for all 2:30 pm New York time Talks: New York link 

Passcode: the smallest prime > 100 

Zoom Link for all 10:00 am Paris/4:00 pm summer Shanghai/5:00 pm winter Shanghai time Talks: Paris/Shanghai link

Passcode: the integer part and first five decimal digits of e (Euler's number)

FUTURE TALKS

December 13: Ben Blum-Smith (Johns Hopkins University)

Title: Almost-universal symmetry-respecting machine learning

Abstract: A standard way to assert that a machine learning (ML) architecture is expressive is to prove that it has a universal approximation guarantee, i.e., that it can approximate functions in a given target class to arbitrary precision on compact subsets of the data space. In invariant and equivariant ML, the function class in question respects some built-in symmetry. Such function classes tend to have a more complicated structure than unrestricted function classes; as a result, architectures may need to be large (in relation to the data) in order to achieve universal approximation guarantees. In this talk, I discuss a workaround: jettisoning a small (measure-zero) subset of the data space in order to lower the computational cost of achieving universal approximation on the rest. The technique is illustrated on a type of symmetry relevant to point cloud data. Physical systems represented by point clouds tend to be symmetric simultaneously with respect to Euclidean isometries of space and relabelings of the points. The known universal architectures that respect both of these symmetries at once are prohibitively large when the point cloud is large; but by giving up a measure-zero set of "bad" point clouds, we can achieve universality on the remaining ones with an architecture not much bigger than the input data. Joint work with Ningyuan (Teresa) Huang, Marco Cuturi, and Soledad Villar.
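
As a loose illustration of the symmetry-respecting point-cloud features the abstract describes (a minimal sketch of the general idea, not the architecture from the talk), the Python snippet below builds features that are invariant to both Euclidean isometries and relabelings of the points: pairwise distances are unchanged by rotations, reflections, and translations, and pooling over the unordered pairs removes any dependence on the labeling. The function names and the choice of distance moments here are hypothetical devices of our own.

    # A minimal, hypothetical sketch of features invariant to both Euclidean
    # isometries and point relabelings (NOT the speaker's architecture).
    import numpy as np

    def pairwise_distances(points):
        # n x d array of points -> n x n matrix of Euclidean distances.
        diff = points[:, None, :] - points[None, :, :]
        return np.linalg.norm(diff, axis=-1)

    def invariant_features(points, num_moments=3):
        # Moments of the multiset of pairwise distances: isometry-invariant
        # (distances) and permutation-invariant (pooling over unordered pairs).
        d = pairwise_distances(points)
        upper = d[np.triu_indices(len(points), k=1)]  # each unordered pair once
        return np.array([np.mean(upper ** k) for k in range(1, num_moments + 1)])

    # Rotating, translating, and relabeling the cloud leaves the features unchanged.
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(5, 3))
    rotation, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
    shift = rng.normal(size=3)
    permutation = rng.permutation(5)
    transformed = cloud[permutation] @ rotation.T + shift
    assert np.allclose(invariant_features(cloud), invariant_features(transformed))

Such pooled distance features are of course far from universal on their own; the talk concerns how close one can get to universality, and at what architecture size, once a measure-zero set of degenerate clouds is discarded.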


January 9: Peng Wang (University of Michigan, Ann Arbor)

TBA

January 23: Joe Kileel (University of Texas at Austin)

TBA

January 30: Jeff Calder (University of Minnesota)

TBA

February 6: Yingzhen Li (Imperial College London)

TBA

February 13: Ryan Murray (North Carolina State University)

TBA

February 20: Rafael Chiclana Vega (Michigan State University)

TBA

February 27: Lénaïc Chizat (École Polytechnique Fédérale de Lausanne)

TBA

March 27: Yuesheng Xu (Old Dominion University)

TBA