Talks

All times are in BST (British Summer Time, the current time zone in the UK).

All talks are hosted on Microsoft Teams. The links in the abstracts below are to the corresponding Microsoft Teams event for that talk. See the Registration page for further details about joining the talks.

For a list of talks ordered by date and time, see the Schedule page.

Henry Adams – Applied topology: from global to local

Wednesday 25th August, 14:30-15:30

Through the use of examples, I will explain one way in which applied topology has evolved since the birth of persistent homology in the early 2000s. The first applications of topology to data emphasized the global shape of a dataset, such as the three-circle model for 3 x 3 pixel patches from natural images, or the configuration space of the cyclo-octane molecule, which is a sphere with a Klein bottle attached via two circles of singularity. More recently, persistent homology is being used to measure the local geometry of data. How do you vectorize geometry for use in machine learning problems? Persistent homology, and its vectorization techniques including persistence landscapes and persistence images, provide popular techniques for incorporating geometry in machine learning. I will survey applications arising from machine learning tasks in agent-based modeling, shape recognition, materials science, and biology.
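As a concrete illustration of the vectorization step mentioned in the abstract, here is a minimal, self-contained sketch of a persistence image: each diagram point (birth, death) is re-coordinatized as (birth, persistence), weighted by its persistence, and smeared with a Gaussian onto a fixed grid. The grid size, weighting, and bandwidth below are illustrative choices, not settings from the talk.

```python
import math

def persistence_image(diagram, res=4, sigma=0.5, xmax=2.0, ymax=2.0):
    """Rasterize a persistence diagram into a res x res grid (a vector for ML).

    diagram: list of (birth, death) pairs with death >= birth.
    Each point becomes (birth, persistence) and contributes a
    persistence-weighted Gaussian bump to every pixel.
    """
    grid = [[0.0] * res for _ in range(res)]
    for birth, death in diagram:
        pers = death - birth
        for i in range(res):
            for j in range(res):
                x = (i + 0.5) * xmax / res  # pixel center, birth axis
                y = (j + 0.5) * ymax / res  # pixel center, persistence axis
                bump = math.exp(-((x - birth) ** 2 + (y - pers) ** 2) / (2 * sigma ** 2))
                grid[i][j] += pers * bump
    return grid

img = persistence_image([(0.0, 1.0), (0.2, 0.3)])
```

Flattening the grid yields a fixed-length feature vector that any standard regressor or classifier can consume, which is the point of the vectorization techniques surveyed in the talk.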

Slides Video (YouTube) Video (Vimeo)

Riccardo Finotello – Algebraic geometry and computer vision: inception neural network for Calabi-Yau manifolds

Thursday 26th August, 13:00-14:00

Computing topological properties of Calabi-Yau manifolds is, in general, a challenging mathematical task: traditional methods lead to complicated algorithms, without closed-form expressions in most cases. At the same time, recent years have witnessed the rise of deep learning as a method for exploring large datasets and learning their patterns and properties. This is especially interesting when it comes to unraveling complicated geometrical structures, a central issue in both mathematics and theoretical physics, as well as in the development of trustworthy AI methods. Motivated by their distinguished role in string theory for the study of compactifications, we compute the Hodge numbers of Complete Intersection Calabi-Yau (CICY) manifolds using deep neural networks. Specifically, we introduce new regression architectures, inspired by Google's Inception network and by multi-task learning, which combine theoretical knowledge about the inputs with recent advances in AI. This shows the potential of deep learning to learn from geometrical data, and it demonstrates the versatility of architectures developed in other contexts, which may therefore find their way into theoretical physics and mathematics for exploration and inference.
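To make the Inception-style idea concrete, here is a minimal numpy sketch (my illustration, not the network from the talk): parallel convolution branches with different kernel sizes scan a flattened configuration-matrix-like input, and their outputs are concatenated before a linear regression head. All sizes and weights here are illustrative stand-ins.

```python
import numpy as np

def inception_block(x, k_small, k_large):
    """Run two parallel 1-D convolution branches and concatenate the results,
    mimicking (in miniature) an Inception-style multi-branch block."""
    branch1 = np.convolve(x, k_small, mode="same")  # small receptive field
    branch2 = np.convolve(x, k_large, mode="same")  # larger receptive field
    return np.concatenate([branch1, branch2])

rng = np.random.default_rng(0)
x = rng.standard_normal(12)  # stand-in for a flattened CICY configuration matrix
features = inception_block(x, rng.standard_normal(1), rng.standard_normal(3))
w = rng.standard_normal(features.size)
hodge_prediction = float(features @ w)  # linear regression head on the concatenated features
```

The multi-branch structure is what lets one architecture pick up patterns at several scales of the input simultaneously; the actual networks in the talk stack such blocks and train the weights.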

Slides Video (YouTube) Video (Vimeo)

Siu-Cheong Lau – Deep learning over the moduli space of quiver representations

Wednesday 25th August, 17:00-18:00

It is interesting to observe that neural networks in machine learning have a basic setup similar to that of quiver representation theory. In this talk, I will build an algebro-geometric formulation of a computing machine that is well-defined over the moduli space of representations. I will also explain a uniformization between spherical, Euclidean and hyperbolic moduli of framed quiver representations, and construct a learning algorithm over these moduli spaces.
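The analogy in the first sentence can be made concrete with a toy sketch (my illustration, not the construction from the talk): take a linear quiver 1 → 2 → 3 with dimension vector (4, 3, 2). A representation assigns a matrix to each arrow, and composing the arrow maps, with a nonlinearity at the middle vertex, is exactly a two-layer neural network forward pass.

```python
import numpy as np

rng = np.random.default_rng(1)

# A representation of the quiver 1 -> 2 -> 3 with dimension vector (4, 3, 2):
# one linear map per arrow.
W_12 = rng.standard_normal((3, 4))  # arrow 1 -> 2
W_23 = rng.standard_normal((2, 3))  # arrow 2 -> 3

def forward(x):
    """Compose the arrow maps along the quiver; the ReLU at the middle vertex
    is what turns the representation into a neural-network-style computation."""
    h = np.maximum(W_12 @ x, 0.0)
    return W_23 @ h

y = forward(np.ones(4))
```

Varying (W_12, W_23) up to base change at each vertex is what moves one around the moduli space of representations that the talk works over.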

Slides Video (YouTube) Video (Vimeo)

Kyu-Hwan Lee – Applications of machine learning to data from number theory

Thursday 26th August, 14:30-15:30

In this talk, we apply machine learning techniques to various datasets from the L-functions and modular forms database (LMFDB) and show that a machine can be trained to distinguish objects in number theory according to their standard invariants. The applications in this talk include class numbers of quadratic number fields, ranks of elliptic curves, and Sato-Tate groups of genus 2 curves. This is joint work with Yang-Hui He and Thomas Oliver.
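As a toy illustration of this kind of pipeline (with synthetic stand-in data, not the actual LMFDB invariants), one can encode each object as a vector of standard invariants and train even a very simple classifier to separate the classes:

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid 'training': one mean feature vector per class."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Synthetic stand-in for invariant vectors of two classes of objects
# (e.g. two possible values of some number-theoretic invariant).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, (20, 5)), rng.normal(2.0, 0.3, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
centroids = fit_centroids(X, y)
```

The interesting question, as in the talk, is whether real invariant vectors drawn from the LMFDB are separable at all; the classifier itself can be this simple.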

Slides Video (YouTube) Video (Vimeo)

Minhyong Kim – How hard is it to learn a mathematical structure?

Wednesday 25th August, 13:15-14:15

Slides Video (YouTube) Video (Vimeo)

Sonja Petrovic – Learning in commutative algebra & models for random algebraic structures

Wednesday 25th August, 15:30-16:30

A commutative algebraist's interest in randomness has many facets, of which this talk highlights two. Namely, we will discuss 1) how to use basic statistics and learning for improving Buchberger's algorithm and 2) how to generate samples of ideals in a 'controlled' way. The two topics, based on joint work with various collaborators and students, form a two-step process in learning on algebraic structures, designed with the aim of avoiding the 'danger zone' of blind machine learning over uninteresting distributions.

For learning, we show that a multiple linear regression model built from a set of easy-to-compute ideal generator statistics can predict the number of polynomial additions somewhat well, better than an uninformed model, and better than regression models built on some intuitive commutative algebra invariants that are more difficult to compute. We also train a simple recursive neural network that outperforms these linear models. Our work serves as a proof of concept, demonstrating that predicting the number of polynomial additions in Buchberger's algorithm is a feasible problem from the point of view of machine learning.
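A minimal sketch of this kind of regression model (with illustrative features and synthetic targets, not the data from the talk): fit a linear map from easy-to-compute generator statistics to a count of polynomial additions.

```python
import numpy as np

# Each row: hypothetical easy-to-compute statistics of an ideal's generators,
# e.g. (number of generators, maximum degree, total number of terms).
X = np.array([
    [3, 2, 7],
    [4, 3, 10],
    [5, 2, 9],
    [6, 4, 15],
    [2, 5, 8],
], dtype=float)

# Synthetic stand-in targets: a "number of polynomial additions" made exactly
# linear in the features so that the least-squares fit is recoverable.
true_w = np.array([2.0, 1.0, 3.0])
y = X @ true_w

# Multiple linear regression via least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w
```

In the real setting the targets come from running Buchberger's algorithm, the fit is of course not exact, and the question is how much of the variance such cheap statistics can explain.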

As a first example of sampling, we present random monomial ideals. Using these models, we prove theorems about the probability distributions, expectations, and thresholds for events involving monomial ideals with a given Hilbert function, Krull dimension, and first graded Betti numbers, and we present several experimentally backed conjectures about the regularity, projective dimension, strong genericity, and Cohen-Macaulayness of random monomial ideals. The models for monomial ideals can be used as a basis for generating other types of algebraic objects and for proving the existence of desired properties.
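A minimal sketch of one such model (an Erdős–Rényi-style model on monomials, in the spirit of the construction described; the parameters here are illustrative): include each monomial of degree at most D independently with probability p, then keep only the minimal generators.

```python
import itertools
import random

def random_monomial_ideal(n, D, p, rng):
    """Sample a monomial ideal in n variables: each monomial of degree 1..D
    (stored as an exponent tuple) is chosen independently with probability p;
    chosen monomials divisible by another chosen monomial are then discarded,
    leaving a minimal generating set."""
    chosen = []
    for d in range(1, D + 1):
        for combo in itertools.combinations_with_replacement(range(n), d):
            expo = [0] * n
            for i in combo:
                expo[i] += 1
            if rng.random() < p:
                chosen.append(tuple(expo))
    # m is a minimal generator iff no other chosen monomial divides it.
    return [m for m in chosen
            if not any(g != m and all(gi <= mi for gi, mi in zip(g, m))
                       for g in chosen)]

gens = random_monomial_ideal(n=3, D=3, p=0.5, rng=random.Random(0))
```

Sampling many ideals this way and tabulating invariants such as Krull dimension or Betti numbers is what produces the experimentally backed conjectures mentioned above.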

Video (YouTube) Video (Vimeo)

Ruriko Yoshida – Tree topologies along a tropical line segment

Thursday 26th August, 17:00-18:00

Tropical geometry with the max-plus algebra has been applied to statistical learning models over spaces of phylogenetic trees, because geometry with the tropical metric over tree spaces has some nice properties, such as convexity with respect to the tropical metric. One of the challenges in applying tropical geometry to tree spaces is the difficulty of interpreting the outcomes of statistical models based on the tropical metric. This talk focuses on the combinatorics of tree topologies along a tropical line segment, an intrinsic geodesic in the tropical metric, between two phylogenetic trees in tree space, and we show some properties of such a segment. Specifically, we show that the probability that the tropical line segment between two randomly chosen trees passes through the origin (the star tree) is zero, and that if two given trees differ by only one nearest-neighbor interchange (NNI) move, then the tree topology of any tree on the tropical line segment between them is the same as the topology of one of the two given trees, with possibly zero branch lengths.
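To make the central object concrete, here is a small sketch (my illustration) of the tropical line segment in the max-plus algebra. Representing each tree by a vector (for example its pairwise-distance vector), the segment between u and v consists of the points max(λ + u_i, v_i) as λ varies; its finitely many breakpoints, where the coordinatewise maximum switches, are where tree topologies can change.

```python
def tropical_segment_point(u, v, lam):
    """One point on the tropical line segment between u and v in the max-plus
    algebra: the coordinatewise max of (lam + u) and v.  Sliding lam from very
    negative to very positive moves the point from v to (a tropical scalar
    multiple of) u."""
    return [max(lam + ui, vi) for ui, vi in zip(u, v)]

u = [0.0, 3.0, 1.0]
v = [2.0, 0.0, 0.0]
path = [tropical_segment_point(u, v, lam) for lam in (-10.0, 0.0, 10.0)]
```

Note that adding a constant to every coordinate does not change the tree a vector represents, which is why lam + u counts as a tropical scalar multiple of u.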

Slides Video (YouTube) Video (Vimeo)

Roozbeh Yousefzadeh – Deep learning generalization, extrapolation, over-parameterization and decision boundaries

Thursday 26th August, 15:30-16:30

Deep neural networks have achieved great success, most notably in learning to classify images. Yet the phenomenon of learning images is not well understood, and the generalization of deep networks is considered a mystery. Recent studies have explained the generalization of deep networks within the framework of interpolation. In this talk, we will see that the task of classifying images requires extrapolation capability, and that interpolation by itself is not adequate to understand the functional task of deep networks. We study image classification datasets in the pixel space, in the feature spaces learned by DNNs in their internal layers, and in the low-dimensional feature space that one can derive using wavelets/shearlets. We show that in all these spaces, image classification remains an extrapolation task to a moderate (yet considerable) degree outside the convex hull of the training set. For few-shot learning, extrapolation is even more significant, yet possible. Reviewing the cognitive science literature, we see that extrapolation and learning can in fact go together.

From the mathematical perspective, a deep learning image classifier is a function that partitions its domain (the pixel space, and also the feature spaces in its internal layers) and assigns a class to each partition. The partitions are defined by decision boundaries, and so is the model. The domain of this function can be considered a hypercube, while the convex hull of the training set occupies only a portion of that hypercube. Since testing samples lie outside that convex hull, the extensions of the decision boundaries are crucial to the model's generalization. From this perspective, and using the Weierstrass approximation theorem, we argue that over-parameterization is a necessary condition for the ability to control the extensions of decision boundaries. Over-parameterization then works in tandem with the training regime to partition the domain desirably outside the convex hull of the training set.

I will also present a homotopy algorithm for computing points on the decision boundaries of deep networks, and finally I will explain how we can leverage decision boundaries to audit and debug ML models used in social applications.
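The claim that test samples lie outside the convex hull of the training set can be probed numerically. Here is a simple sketch (my illustration, not the authors' code): a Frank–Wolfe iteration that finds the closest convex combination of training points to a query point; a distance well above zero means the query lies outside the hull, so classifying it requires extrapolation.

```python
import numpy as np

def hull_distance(points, q, iters=4000):
    """Approximate distance from q to the convex hull of the rows of `points`
    via Frank-Wolfe: maintain simplex weights w and repeatedly move mass
    toward the vertex that most decreases ||w @ points - q||^2."""
    m = points.shape[0]
    w = np.full(m, 1.0 / m)
    for t in range(iters):
        grad = 2.0 * points @ (w @ points - q)  # gradient of squared error w.r.t. w
        j = int(np.argmin(grad))                # best vertex of the probability simplex
        step = 2.0 / (t + 2.0)
        w *= 1.0 - step
        w[j] += step
    return float(np.linalg.norm(w @ points - q))

# A toy "training set": the corners of the unit square.
square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

For high-dimensional image data the same computation is usually posed as a linear or quadratic program; the Frank–Wolfe form is used here only because it is a few self-contained lines.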

Slides Video (YouTube) Video (Vimeo)