Code of Conduct
We are committed to making the summer school a safe and welcoming space for everyone. Please take a moment to review our Code of Conduct, which outlines our expectations for participant behavior.
Omer Bobrowski (Queen Mary University, London)
Random topology – theory and applications
Lecture #1: Introduction
Lecture #2: Morse Theory in Stochastic Topology
Lecture #3: Percolation Theory & Stochastic Topology
Lecture #4: Universality in TDA
Anna Gusakova (University of Münster)
From random polytopes to random tessellations - an overview
Anthea Monod (Imperial College, London)
Persistent Homology in Statistics, Machine Learning, and AI
Raphaël Lachièze-Rey (INRIA Paris)
Limit theorems for topological and geometric functionals
The poster session is now full! Any further poster registrations will be put on a waitlist.
We plan to organize a poster session on the first afternoon. This session will give participants the opportunity to tell their peers about their own research topics. It will serve both as an opportunity for scientific exchange and as a way to initiate networking among the participants. Knowing about each other's research interests from day 1 will greatly facilitate interactions throughout the entire week.
Ghuru Ganeshan
Extracting High Quality Data Subsets using Monte-Carlo Search
Often, big or imbalanced datasets are subject to undersampling in order to improve the overall accuracy of the underlying predictive model. In this paper, we study undersampling from the perspective of information content and propose and analyze a Monte-Carlo search methodology to obtain high-quality, low-redundancy data subsets from large datasets. Our strategy is to transform the data redundancy structure into a graph-theoretic format and assign importance to each data point in terms of vertex degrees. We then extract low-redundancy data points via iteration, allowing for random exploration in the intermediate steps. Our simulation results indicate that if the redundancy graph is sufficiently sparse, then there is a non-trivial exploration probability that maximizes the quality of the final dataset. We illustrate our methodologies using the zoo animal dataset available on Kaggle and also discuss theoretical questions for potential future research regarding the properties of the optimal exploration probability.
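A minimal sketch of the degree-based Monte-Carlo idea described in this abstract, under illustrative assumptions: the feature matrix, the cosine-similarity threshold used to build the redundancy graph, and the names `p_explore`, `threshold`, and `k` are placeholders introduced here, not the paper's actual implementation.

```python
# Sketch (not the authors' code): build a redundancy graph, then repeatedly keep a
# low-degree (low-redundancy) point, except that with probability `p_explore` a
# remaining point is chosen uniformly at random (the "random exploration" step).
import numpy as np

def redundancy_graph(X, threshold=0.9):
    """Adjacency matrix: two points are linked when their cosine similarity exceeds `threshold`."""
    unit = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = unit @ unit.T
    return (sims > threshold) & ~np.eye(len(X), dtype=bool)

def extract_subset(X, k, p_explore=0.1, threshold=0.9, seed=0):
    """Iteratively select up to k points with low redundancy degree."""
    rng = np.random.default_rng(seed)
    adj = redundancy_graph(X, threshold)
    remaining = list(range(len(X)))
    chosen = []
    while remaining and len(chosen) < k:
        degrees = adj[np.ix_(remaining, remaining)].sum(axis=1)
        if rng.random() < p_explore:
            idx = int(rng.integers(len(remaining)))   # exploration: uniform random pick
        else:
            idx = int(np.argmin(degrees))             # exploitation: lowest-redundancy point
        point = remaining.pop(idx)
        chosen.append(point)
        # Drop neighbours of the chosen point so the final subset stays low-redundancy.
        remaining = [j for j in remaining if not adj[point, j]]
    return chosen

# Example on synthetic data; the abstract instead uses the Kaggle zoo animal dataset.
X = np.random.default_rng(1).standard_normal((200, 10))
subset = extract_subset(X, k=30, p_explore=0.1)
print(len(subset), "points selected")
```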
Vincent Grande
Point-Level Topological Representation Learning on Point Clouds
Topological Data Analysis (TDA) allows us to extract powerful topological and higher-order information on the global shape of a data set or point cloud. Tools like Persistent Homology or the Euler Transform give a single complex description of the global structure of the point cloud. However, common machine learning applications like classification require point-level information and features to be available. In this paper, we bridge this gap and propose a novel method to extract node-level topological features from complex point clouds using discrete variants of concepts from algebraic topology and differential geometry. We verify the effectiveness of these topological point features (TOPF) on both synthetic and real-world data and study their robustness under noise and heterogeneous sampling.
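For readers unfamiliar with the global descriptors the abstract contrasts with point-level features, here is a minimal sketch (assuming the `gudhi` library and a synthetic point cloud, and not the authors' TOPF code) of computing a persistence diagram for a whole point cloud.

```python
# Sketch: a persistence diagram summarizes the global shape of the cloud;
# it does not by itself attach a feature vector to each individual point,
# which is the gap that point-level methods such as TOPF aim to fill.
import numpy as np
import gudhi

rng = np.random.default_rng(0)
# Sample points near a circle, so one prominent 1-dimensional hole is expected.
angles = rng.uniform(0, 2 * np.pi, 100)
points = np.c_[np.cos(angles), np.sin(angles)] + 0.05 * rng.standard_normal((100, 2))

# Vietoris-Rips filtration up to dimension 2, then persistent homology.
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)
diagram = simplex_tree.persistence()  # list of (dimension, (birth, death)) pairs

for dim, (birth, death) in diagram:
    if death - birth > 0.5:  # keep only prominent features
        print(f"H{dim}: birth={birth:.2f}, death={death:.2f}")
```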
Wojciech Chachólski, Christina Kapatsori, and Björn Wehlin
Random complexes in the metric space setting
We are interested in random complexes in arbitrary metric spaces. Our focus is on those complexes whose homotopical properties allow for control of their homotopy-invariant features in a probabilistic setting. We will share some of our early progress and candidate definitions in this context. For example, we illustrate how these homotopical aspects imply measurability of the associated random variables. This is joint work with Christina Kapatsori and Wojciech Chachólski.