Thursday, June 13th, 5:45-7:15pm and Friday, June 14th, 2:30-4pm
Streaming link: TBA
Chat link: TBA
This workshop brings attention to directional transforms, an area of applied topology that has received great interest in recent years. Directional transforms, including, but not limited to, the Euler Characteristic Transform and the Persistent Homology Transform, are methods that capture the full information of input shape data in a form that is easily fed into downstream pipelines, such as machine learning pipelines. Theoretical results guarantee that these two transforms completely capture shape data, giving credence to their viability for the analysis of real-world data. In addition to highlighting theoretical results on existing transforms, we hope to highlight research on other directional transforms, such as the merge tree directional transform.
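As a concrete illustration of the idea (our own sketch, not tied to any speaker's implementation): one direction slice of the ECT of a finite simplicial complex is the Euler characteristic of the sublevel set in a chosen direction $v$, as a function of height $t$. A simplex is present once all of its vertices have height at most $t$, and it contributes $(-1)^{\dim}$ to the Euler characteristic.

```python
import numpy as np

def ect_slice(vertices, simplices, direction, heights):
    # Height of each vertex in the chosen direction.
    h = vertices @ direction
    # A simplex enters the sublevel set at the max height of its vertices.
    entry = np.array([max(h[v] for v in s) for s in simplices])
    # Each simplex contributes (-1)^dim to the Euler characteristic.
    sign = np.array([(-1) ** (len(s) - 1) for s in simplices])
    return [int(sign[entry <= t].sum()) for t in heights]

# Boundary of a triangle: three vertices and three edges (a topological circle).
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
simps = [(0,), (1,), (2,), (0, 1), (1, 2), (0, 2)]
curve = ect_slice(verts, simps, np.array([1.0, 0.0]), [-1.0, 0.0, 0.5, 1.0])
# curve == [0, 1, 1, 0]: empty set, then a contractible arc, then the full circle
```

Varying the direction over the sphere and recording such curves yields the full transform; the function and dataset names here are illustrative only.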
This workshop has two major goals: On the one hand, we want to illuminate new research on the theoretical properties of directional transforms. On the other hand, we hope to have speakers who will highlight cutting-edge applications of directional transforms to real-world data sets.
Thursday, June 13th
5:45 pm: Sarah McGuire
6:20 pm: Erin Chambers
Friday, June 14th
2:30 pm: Anna Schenfisch
3:05 pm: Mattie Ji (virtual)
Speaker: Sarah McGuire
Title: Classification of Euler Characteristic Transforms using Convolutional Neural Networks
Abstract: The Euler Characteristic Transform (ECT) is a method that is simple to define and compute, and that descriptively represents the topological shape of data. In contrast to alternative directional transform methods in TDA, the ECT is easier to compute and to represent in a format well suited for machine learning tasks. We propose a particular choice of CNN architecture for the classification of ECT data, leveraging the inherent structure of ECT data on a cylinder. In this talk, I will describe an ECT-CNN classification pipeline, important rotation-equivariance properties of the model, and applications to two leaf-shape datasets.
Speaker: Erin Chambers
Title: Bounding the Interleaving Distance for Mapper Graphs with a Loss Function
Abstract: Data consisting of a graph with a function mapping into $\mathbb{R}^d$ arise in many data applications, encompassing structures such as Reeb graphs, geometric graphs, and knot embeddings. As such, the ability to compare and cluster such objects is required in a data analysis pipeline, leading to a need for distances between them. In this work, we study the interleaving distance on a discretization of these objects, $\mathbb{R}^d$-mapper graphs, where functor representations of the data can be compared by finding pairs of natural transformations between them. However, in many cases, computation of the interleaving distance is NP-hard. For this reason, we take inspiration from recent work by Robinson to find quality measures for families of maps that do not rise to the level of a natural transformation, called assignments. We then endow the functor images with the extra structure of a metric space and define a loss function which measures how far an assignment is from making the required diagrams of an interleaving commute. Finally, we show that the loss function can be computed in polynomial time for a given assignment. We believe this idea is both powerful and translatable, with the potential to provide approximations and bounds on interleavings in a broad array of contexts.
Joint work with Ishika Ghosh, Elizabeth Munch, Sarah Percival and Bei Wang.
Speaker: Anna Schenfisch
Title: Concise vs. verbose descriptors – how events with a trivial lifespan can have a non-trivial impact
Abstract: Standard methods for computing topological descriptors (e.g., persistence diagrams) output instantaneous events (e.g., on-diagonal points), which are then typically discarded. However, retaining events with trivial lifespan has dramatic advantages if our aim is to find finite sets of descriptors corresponding to lower-star filtrations that completely represent some underlying shape. In this talk, we explore key differences between such sets of concise descriptors, which are standard, versus sets of verbose descriptors, which retain instantaneous events. In particular, we discuss a few results for verbose transforms where establishing parallel results in the concise descriptor setting is particularly challenging.
Speaker: Mattie Ji
Title: Euler Characteristics and Homotopy Types of Definable Sublevel Sets
Abstract: Sublevel sets have been widely used in both pure and applied branches of mathematics. Motivated by Morse theory and topological data analysis (TDA), we devote this talk to the Euler characteristics and homotopy properties of sublevel sets in an o-minimal structure.
Given a definable function $f: S \to \mathbb{R}$ on a definable set $S$, we study sublevel sets of the form $S^f_t = \{x \in S: f(x) \leq t\}$ for all $t \in \mathbb{R}$. We prove that the Euler characteristic of $S^f_t$ is right continuous with respect to $t$. Furthermore, when $S$ is compact, we show that $S^f_{t+\delta}$ deformation retracts to $S^f_t$ for all sufficiently small $\delta > 0$. We apply these results to study integral transforms in TDA. Notably, we show that the Euler characteristic transform (ECT) is right continuous and therefore can be represented by the smooth Euler characteristic transform (SECT).
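To make the right-continuity statement concrete, here is a small illustrative sketch (our own, not from the talk): take a polygonal model of a circle with the height function $f(x) = y$. The Euler characteristic of the sublevel sets $S^f_t$ is then a step function that takes its new value at each jump, i.e., it is right continuous in $t$.

```python
import numpy as np

# Polygonal model of a circle: 4 vertices and 4 edges (total chi = 0).
verts_y = np.array([-1.0, 0.0, 1.0, 0.0])   # f(x) = y-coordinate of each vertex
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def chi_sublevel(t):
    # Euler characteristic of S^f_t under the lower-star convention:
    # an edge is present once both endpoints have height <= t.
    v = int((verts_y <= t).sum())
    e = sum(1 for a, b in edges if max(verts_y[a], verts_y[b]) <= t)
    return v - e

curve = [chi_sublevel(t) for t in (-2.0, -1.0, 0.0, 0.5, 1.0)]
# curve == [0, 1, 1, 1, 0]: chi jumps at t = -1 (a component appears) and
# t = 1 (the loop closes), taking its new value at the jump itself.
```

The step function here already equals its right-hand limit at every $t$, matching the right-continuity result; the variable names are illustrative only.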
The majority of this talk is joint work with Kun Meng and Kexin Ding, with some additional material from the presenter's own honors thesis.