Note: All times are UK times.
11:00-12:00
Tom Coates (Imperial)
Machine Learning Detects Terminal Singularities
I will describe an example of AI-assisted mathematical discovery, which is joint work with Al Kasprzyk and Sara Veneziale. We consider the problem of determining whether a toric variety is a Q-Fano variety. Q-Fano varieties are Fano varieties that have mild singularities called terminal singularities; they play a key role in the Minimal Model Programme. Except for the special case of weighted projective spaces, no efficient global algorithm for checking terminality of toric varieties was known.
We show that, for eight-dimensional Fano toric varieties X of Picard rank two, a simple feedforward neural network can predict with 95% accuracy whether or not X has terminal singularities. The input data to the neural network is the weights of the toric variety X; this is a matrix of integers that determines X. We use the neural network to give the first sketch of the landscape of Q-Fano varieties in eight dimensions.
Inspired by the ML analysis, we formulate and prove a new global, combinatorial criterion for a toric variety of Picard rank two to have terminal singularities. This gives new evidence that machine learning can be a powerful tool in developing mathematical conjectures and accelerating theoretical discovery.
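The classifier described above can be sketched schematically. The architecture, sizes, and random stand-in data below are assumptions for illustration, not the speaker's actual setup: a Picard-rank-2, dimension-8 toric variety is specified by a 2 x 10 integer weight matrix, flattened here into a 20-dimensional input to a small feedforward network.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n):
    # 2 x 10 integer weight matrices, flattened to 20-vectors.
    X = rng.integers(-5, 6, size=(n, 20)).astype(float)
    # Hypothetical stand-in label; the real terminality labels come from
    # exact combinatorial computation on the weight data.
    y = (X.sum(axis=1) > 0).astype(float)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# One hidden ReLU layer, sigmoid output, trained with binary cross-entropy.
W1 = rng.normal(0, 0.1, (20, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, (64, 1));  b2 = np.zeros(1)

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)
    return sigmoid(h @ W2 + b2).ravel(), h

lr = 0.05
for step in range(500):
    X, y = make_batch(256)
    p, h = forward(X)
    g_out = (p - y)[:, None] / len(y)      # gradient of BCE w.r.t. logits
    gW2 = h.T @ g_out;  gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (h > 0)         # backprop through ReLU
    gW1 = X.T @ g_h;    gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

X_test, y_test = make_batch(2000)
accuracy = ((forward(X_test)[0] > 0.5) == (y_test > 0.5)).mean()
```

On this toy linearly-separable label the network trains to high accuracy; the point is only the pipeline shape (integer weight matrix in, binary terminality prediction out).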
13:00-14:00
Matija Kazalicki (Zagreb)
Ranks of elliptic curves and deep neural networks
Determining the rank of an elliptic curve E/Q is a difficult problem. In applications such as the search for curves of high rank, one often relies on heuristics to estimate the analytic rank (which is equal to the rank under the Birch and Swinnerton-Dyer conjecture).
In this talk, we discuss a novel rank classification method based on deep convolutional neural networks (CNNs). The method takes as input the conductor of E and a sequence of normalized Frobenius traces a_p for primes p in a certain range (p<10^k for k=3,4,5), and aims to predict the rank or detect curves of "high" rank. We compare our method with eight simple neural network models of the Mestre-Nagao sums, which are widely used heuristics for estimating the rank of elliptic curves.
We evaluate our method on two datasets: the LMFDB and a custom dataset consisting of elliptic curves with trivial torsion, conductor up to 10^30, and rank up to 10. Our experiments demonstrate that the CNNs outperform the Mestre-Nagao sums on the LMFDB dataset. On the custom dataset, the performance of the CNNs and the Mestre-Nagao sums is comparable. This is joint work with Domagoj Vlah.
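As a schematic illustration (not the speakers' code): the Frobenius traces a_p for a short Weierstrass curve y^2 = x^3 + ax + b can be computed by direct point counting, normalized as a_p / (2*sqrt(p)) for use as network input, and fed into one simple Mestre-Nagao-style sum. The particular sum below is one assumed variant chosen for illustration; the talk compares several such heuristics.

```python
import math

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(range(p*p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

def a_p(a, b, p):
    """Trace of Frobenius a_p = p + 1 - #E(F_p), by brute-force count."""
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x*x*x + a*x + b) % p
        if rhs == 0:
            count += 1                        # one point (x, 0)
        elif pow(rhs, (p - 1) // 2, p) == 1:  # rhs is a square mod p
            count += 2                        # two points (x, +-y)
    return p + 1 - count

def mestre_nagao_sum(a, b, bound):
    """One variant: (1/log bound) * sum over good p < bound of a_p*log(p)/p.
    Heuristically more negative for curves of higher rank."""
    disc = -16 * (4*a**3 + 27*b**2)
    total = 0.0
    for p in primes_up_to(bound):
        if disc % p == 0:      # skip primes of bad reduction
            continue
        total += a_p(a, b, p) * math.log(p) / p
    return total / math.log(bound)

# y^2 = x^3 - x (rank 0) versus y^2 = x^3 - 25x (rank 1).
normalized = [a_p(-1, 0, p) / (2*math.sqrt(p)) for p in primes_up_to(50)[1:]]
s_rank0 = mestre_nagao_sum(-1, 0, 1000)
s_rank1 = mestre_nagao_sum(-25, 0, 1000)
```

By the Hasse bound the normalized traces lie in [-1, 1], which is what makes them convenient inputs for a convolutional network.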
14:15-15:15
Ed Hirst (Queen Mary)
Machine Learning Sasakian and G2 topology on contact Calabi-Yau 7-manifolds
Calabi-Yau links are constructed for all 7555 weighted projective spaces with Calabi-Yau 3-fold hypersurfaces. Topological properties such as the Crowley-Nordström invariants and Sasakian Hodge numbers are computed, leading to new invariant values and some conjectures on their construction. Machine learning methods are implemented to predict these invariants, as well as to optimise their computation via Gröbner bases.
15:30-16:30
Kathlén Kohn (KTH)
Understanding Linear Convolutional Neural Networks via Sparse Factorizations of Real Polynomials
This talk will explain that Convolutional Neural Networks without activation parametrize semialgebraic sets of real homogeneous polynomials that admit a certain sparse factorization. We will investigate how the geometry of these semialgebraic sets (e.g., their singularities and relative boundary) changes with the network architecture. Moreover, we will explore how these geometric properties affect the optimization of a loss function for given training data. This talk is based on joint work with Guido Montúfar, Vahid Shahverdi, and Matthew Trager.
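The basic identity behind this parametrization can be checked in a few lines: a linear 1D convolutional network (stride 1, no activations) computes a single convolution whose filter is the convolution of the layer filters, i.e. the product of the polynomials whose coefficient vectors are those filters. The network therefore parametrizes exactly the polynomials admitting such a sparse factorization. A minimal numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
f1 = rng.normal(size=3)   # layer-1 filter <-> degree-2 polynomial
f2 = rng.normal(size=4)   # layer-2 filter <-> degree-3 polynomial
x = rng.normal(size=20)   # input signal

# Two-layer linear conv net (full convolutions for simplicity).
y_net = np.convolve(np.convolve(x, f1), f2)

# Equivalent single layer: its filter is the polynomial product f1 * f2.
f_end = np.convolve(f1, f2)
y_one = np.convolve(x, f_end)

assert np.allclose(y_net, y_one)
```

The end-to-end filter has length 3 + 4 - 1 = 6, matching the degree count for a product of polynomials of degrees 2 and 3.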
11:00-12:00
Challenger Mishra (Cambridge)
Mathematical Conjecture Generation and Machine Intelligence
Conjectures hold a special status in mathematics. Good conjectures epitomise milestones in mathematical discovery, and have historically inspired new mathematics and shaped progress in theoretical physics. Hilbert’s list of 23 problems and André Weil’s conjectures guided major developments in mathematics for decades. Crafting conjectures can often be understood as a problem in pattern recognition, for which Machine Learning (ML) is tailor-made. In this talk, I will propose a framework that allows a principled study of a space of mathematical conjectures. Using this framework and exploiting domain knowledge and machine learning, we generate a number of conjectures in number theory and group theory. I will present evidence in support of some of the resulting conjectures and present a new theorem. I will lay out a vision for this endeavour, and conclude by posing some general questions about the pipeline.
13:00-14:00
Malik Amir (SolutionAI and Montréal)
Data-Driven Insights into the Rank of Elliptic Curves of Prime Conductors
In this presentation, we explore the intersection of data science and elliptic curves of prime conductor. We will begin with a quick introduction to elliptic curves before introducing the celebrated Birch and Swinnerton-Dyer conjecture. We will discuss the original insight of Birch and Swinnerton-Dyer concerning the traces of Frobenius and what these traces reveal about certain arithmetic data attached to elliptic curves. We will be especially interested in the rank of elliptic curves of prime conductor. Throughout the talk, we will present experiments performed on the largest known dataset of such curves, the Bennett-Gherga-Rechnitzer dataset, and will explicitly formulate open questions based on these observations. We will discuss some tension between the data and the minimalist conjecture, which stipulates that the average rank should be 1/2. Among the various data-science experiments performed, we will describe an interesting bias between the distribution of the 2-torsion coefficients and the distribution of the rank. Finally, we will discuss the value of simple machine learning models for predicting the rank from the traces of Frobenius.

14:15-15:15
Elli Heyes (LIMS)
New Calabi-Yau Manifolds from Genetic Algorithms
Calabi-Yau manifolds can be obtained as hypersurfaces in toric varieties built from reflexive polytopes. We generate reflexive polytopes in various dimensions using a genetic algorithm. As a proof of principle, we demonstrate that our algorithm reproduces the full set of reflexive polytopes in two and three dimensions, and in four dimensions with a small number of vertices and points. Motivated by this result, we construct five-dimensional reflexive polytopes with the lowest number of vertices and points. By calculating the normal form of the polytopes, we establish that many of these are not in existing datasets and therefore give rise to new Calabi-Yau four-folds.
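The genetic-algorithm search described above follows the standard selection-crossover-mutation loop, sketched here with a toy bit-string fitness as a stand-in. In the real application the genome encodes candidate polytope vertex data and the fitness rewards being (close to) reflexive; the target-matching fitness below is purely illustrative.

```python
import random

random.seed(0)
GENOME, POP, GENS = 40, 60, 200
target = [random.randint(0, 1) for _ in range(GENOME)]

def fitness(g):
    # Stand-in: fraction of loci matching a hidden target. The actual
    # fitness would measure how far a vertex set is from reflexivity.
    return sum(a == b for a, b in zip(g, target)) / GENOME

def crossover(p, q):
    cut = random.randrange(1, GENOME)   # single-point crossover
    return p[:cut] + q[cut:]

def mutate(g, rate=0.02):
    return [1 - b if random.random() < rate else b for b in g]

pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 5]             # keep the fittest fifth unchanged
    children = [mutate(crossover(*random.sample(elite, 2)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
```

Elitism guarantees the best individual never regresses; in the polytope setting the surviving candidates are then deduplicated via the normal form mentioned above.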
15:30-16:30
Katia Matcheva (Florida)
Deep Learning Symmetries in Physics from First Principles
Symmetries are the cornerstones of modern theoretical physics, as they imply fundamental conservation laws. The recent boom in AI algorithms and their successful application to high-dimensional large datasets from all aspects of life motivates us to approach the problem of discovery and identification of symmetries in physics as a machine-learning task. In a series of papers, we have developed and tested a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset. We use fully connected neural network architectures to model the symmetry transformations and the corresponding generators. Our proposed loss functions ensure that the applied transformations are symmetries and that the corresponding set of generators is orthonormal and forms a closed algebra. One variant of our method is designed to discover symmetries in a reduced-dimensionality latent space, while another variant is capable of obtaining the generators in the canonical sparse representation. Our procedure is completely agnostic and has been validated with several examples illustrating the discovery of the symmetries behind the orthogonal, unitary, Lorentz, and exceptional Lie groups.
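The structure of these losses can be illustrated in miniature (a sketch under stated assumptions, not the authors' implementation): recover the generator of a continuous symmetry of the oracle f(x) = x1^2 + x2^2. A candidate generator G acts infinitesimally as x -> x + eps*Gx, and the first-order change of f is x^T (G + G^T) x; the loss demands that this vanish on sampled points, while a normalization term fixes the scale of G and excludes the trivial solution G = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 2))          # sampled data points in the plane

def loss(G):
    S = G + G.T                        # symmetric part drives the change in f
    invariance = np.mean(np.einsum('ni,ij,nj->n', X, S, X) ** 2)
    normalization = (np.sum(G * G) - 2.0) ** 2
    return invariance + 0.1 * normalization

# Plain gradient descent with a numerical gradient (only 4 parameters).
G = rng.normal(size=(2, 2)) * 0.5
lr, h = 0.02, 1e-5
for _ in range(800):
    grad = np.zeros_like(G)
    for i in range(2):
        for j in range(2):
            E = np.zeros_like(G); E[i, j] = h
            grad[i, j] = (loss(G + E) - loss(G - E)) / (2 * h)
    G -= lr * grad

# G converges to an antisymmetric matrix of norm sqrt(2): the rotation
# generator [[0, -1], [1, 0]] up to sign, i.e. the SO(2) symmetry of f.
```

The full method replaces the fixed linear generator with neural-network parametrizations and adds orthonormality and closure terms when several generators are learned jointly.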