One World Mathematics of INformation, Data, and Signals (1W-MINDS) Seminar
Given the impossibility of travel during the COVID-19 crisis, the One World MINDS seminar was founded as an inter-institutional global online seminar aimed at giving researchers interested in mathematical data science, computational harmonic analysis, and related applications access to high-quality talks. Talks are held on Thursdays, either at 2:30 pm New York time or at 10:00 am Paris time (4:00 pm Shanghai time in summer, 5:00 pm in winter).
Current Organizers (September 2022 - May 2023): Axel Flinth (Principal Organizer for Europe/Asia, Umeå University), Longxiu Huang (Principal Organizer for The Americas, Michigan State University), Alex Cloninger (UC San Diego), Jamie Haddock (Harvey Mudd College), Mark Iwen (Michigan State University), Felix Krahmer (Technische Universität München), Weilin Li (City College of New York), Karin Schnass (University of Innsbruck), and Yuying Xie (Michigan State University).
To sign up to receive email announcements about upcoming talks, click here.
To join the MINDS Slack channel, click here.
The organizers would like to acknowledge support from the Michigan State University Department of Mathematics. Thank you.
Passcode for all 2:30 pm New York time Talks: the smallest prime > 100
Zoom Link for all 10:00 am Paris/4:00 pm Summer Shanghai/5:00 pm Winter Shanghai time Talks: Paris/Shanghai link
Passcode: the integer part and first five decimals of e (Euler's number)
FUTURE TALKS
Title: Combining network analysis and persistent homology for classifying behavior of time series
Abstract: Persistent homology, the flagship method of topological data analysis, can be used to provide a quantitative summary of the shape of data. One way to pass data to this method is to start with a finite, discrete metric space (whether or not it arises from a Euclidean embedding) and to study the resulting filtration of the Rips complex. In this talk, we will discuss several available methods for turning a time series into a discrete metric space, including the Takens embedding, $k$-nearest neighbor networks, and ordinal partition networks. Combining these constructions with persistent homology and machine learning methods, we show how to classify behavior in both synthetic and experimental time series data.
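To make the pipeline above concrete, here is a minimal Python sketch (not code from the talk) of one of the constructions mentioned, the Takens delay embedding: it turns a scalar time series into a point cloud whose pairwise Euclidean distance matrix can then be fed to a Rips filtration, e.g. via a TDA library such as Ripser. The embedding dimension and delay below are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def takens_embedding(x, dim=3, delay=1):
    """Delay-coordinate (Takens) embedding of a 1-D time series.

    Returns an array of shape (len(x) - (dim - 1) * delay, dim) whose rows are
    the delay vectors (x[i], x[i + delay], ..., x[i + (dim - 1) * delay]).
    """
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

# Example: a noisy periodic signal.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)

# The embedded point cloud is a finite metric space under the Euclidean metric;
# its distance matrix is what one would pass to a Rips filtration.
cloud = takens_embedding(x, dim=3, delay=25)
dist_matrix = squareform(pdist(cloud))
print(cloud.shape, dist_matrix.shape)
```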
Title: Nonnegative Matrix Factorization: Introduction, Identifiability and Beyond
Abstract: Given a nonnegative matrix X and a factorization rank r, nonnegative matrix factorization (NMF) approximates the matrix X as the product of a nonnegative matrix W with r columns and a nonnegative matrix H with r rows. NMF has become a standard linear dimensionality reduction technique in data mining and machine learning. In this talk, we first introduce NMF and show how it can be used as an interpretable unsupervised data analysis tool in various applications, including hyperspectral image unmixing, image feature extraction, and document classification. Then, we discuss the issue of non-uniqueness of NMF decompositions, also known as the identifiability issue, which is crucial in many applications. Finally, we discuss how we can go beyond NMF by considering non-linear and deep extensions, which are useful in real-world applications and offer many avenues for future research.
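As a concrete illustration of the factorization model X ≈ WH described above, here is a minimal NumPy sketch (not the methods from the talk) of the classical Lee-Seung multiplicative-update algorithm for the Frobenius-norm NMF objective; the function name, rank, and iteration count are illustrative choices.

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=500, eps=1e-9, seed=0):
    """Approximate a nonnegative m x n matrix X as W @ H, with W (m x r) and
    H (r x n) nonnegative, using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Each update keeps the factors nonnegative and does not increase ||X - WH||_F.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Example: recover an approximate factorization of a synthetic rank-5 matrix.
rng = np.random.default_rng(1)
X = rng.random((50, 5)) @ rng.random((5, 40))   # nonnegative, rank 5
W, H = nmf_multiplicative(X, r=5)
print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```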
Title: TBA
Abstract: TBA
Title: TBA
Abstract: TBA
April 20: Clarice Poon [10:00 am Paris/ 4:00 pm Shanghai time]
Title: TBA
Abstract: TBA
April 27: TBA [2:30 pm New York time]
Title: TBA
Abstract: TBA
May 25: TBA [2:30 pm New York time]
Title: TBA
Abstract: TBA
June 1: TBA [10:00 am Paris/ 4:00 pm Shanghai time]
Title: TBA
Abstract: TBA