Talks

  • Paul Breiding (Universität Kassel): Euclidean Distance Degree and Mixed Volume

Abstract: The Euclidean Distance Degree (EDD) of an algebraic variety V counts the number of complex critical points of the distance function from a generic fixed point outside of V to the points of V.

The BKK Theorem (Bernstein, Kushnirenko, and Khovanskii) says that the number of complex zeros of a generic sparse polynomial system equals the mixed volume of the Newton polytopes of the polynomials.

In this talk I will discuss why, for a generic sparse polynomial f, the EDD of the hypersurface f=0 equals the mixed volume of the Lagrange multiplier equations for the EDD.

This has consequences for the use of polynomial homotopy continuation in computing the ED-critical points, and it provides new formulas for the EDD. (Joint work with Frank Sottile and James Woodcock.)
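For orientation, the Lagrange multiplier system in question can be written down explicitly. The following is the standard formulation of the ED-critical equations for a hypersurface, included here as context rather than as content of the talk:

```latex
% ED-critical equations for the hypersurface V = {f = 0} and a generic
% data point u: the unknowns are the point x and the multiplier lambda.
\begin{aligned}
  f(x) &= 0, \\
  u - x &= \lambda \, \nabla f(x).
\end{aligned}
% The EDD of V is the number of complex solutions (x, lambda) of this
% system; the talk's result equates it, for generic sparse f, with the
% mixed volume of the system's Newton polytopes.
```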

Recording

  • Michael R. Douglas (CMSA Harvard and Stony Brook University): Numerical Calabi-Yau metrics from holomorphic networks

Abstract: We propose machine-learning-inspired methods for computing numerical Calabi-Yau (Ricci-flat Kähler) metrics, and implement them using TensorFlow/Keras. We compare them with previous work and find that they are far more accurate for manifolds with little or no symmetry. We also discuss issues such as overparameterization and the choice of optimization method.

Joint work with Subramanian Lakshminarasimhan and Yidi Qi.
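As a rough sketch of this kind of setup (not the authors' code: the network, the loss, and the data are all placeholder assumptions), one can train a small Keras model on sampled points, with a custom training step standing in for the actual Ricci-flatness objective:

```python
import tensorflow as tf

# Hypothetical inputs: 'points' is a batch of real coordinates of
# sampled points on the manifold, 'target' a reference volume density
# at those points; both would come from a sampling routine not shown.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1),  # scalar correction to the Kahler potential
])
optimizer = tf.keras.optimizers.Adam(1e-3)

def train_step(points, target):
    with tf.GradientTape() as tape:
        # Schematic placeholder loss: the real objective compares the
        # volume form of the learned metric with the holomorphic volume
        # form, not the raw network output with a target vector.
        pred = tf.squeeze(model(points), axis=-1)
        loss = tf.reduce_mean(tf.square(pred - target))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```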


Recording

  • James Halverson (Northeastern University): Knots and Natural Language

Abstract: We introduce natural language processing into the study of knot theory, as made natural by the braid word representation of knots. We study the UNKNOT problem of determining whether or not a given knot is the unknot. After describing an algorithm to randomly generate $N$-crossing braids and their knot closures and discussing the induced prior on the distribution of knots, we apply binary classification to the UNKNOT decision problem. We find that the Reformer and shared-QK Transformer network architectures outperform fully-connected networks, though all perform well. Perhaps surprisingly, we find that accuracy increases with the length of the braid word, and that the networks learn a direct correlation between the confidence of their predictions and the degree of the Jones polynomial. Finally, we utilize reinforcement learning (RL) to find sequences of Markov moves and braid relations that simplify knots and can identify unknots by explicitly giving the sequence of unknotting actions. Trust region policy optimization (TRPO) performs consistently well for a wide range of crossing numbers and thoroughly outperforms other RL algorithms and random walkers. Studying these actions, we find that braid relations are more useful in simplifying to the unknot than one of the Markov moves.
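The braid generation step lends itself to a short illustration. The sketch below (hypothetical; the paper's actual sampler and its induced prior are not reproduced here) encodes a braid word on n_strands strands as a sequence of nonzero integers, where +i and -i stand for the generator $\sigma_i$ and its inverse:

```python
import random

def random_braid_word(n_crossings: int, n_strands: int, seed=None):
    """Sample a braid word with n_crossings letters on n_strands strands.

    Each letter is a generator sigma_i or its inverse, encoded as the
    integer +i or -i for i in 1..n_strands-1; a knot is obtained by
    closing the braid.
    """
    rng = random.Random(seed)
    word = []
    for _ in range(n_crossings):
        i = rng.randint(1, n_strands - 1)
        word.append(i if rng.random() < 0.5 else -i)
    return word

# Example: a 10-crossing braid word on 4 strands.
print(random_braid_word(10, 4, seed=0))
```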


Recording

  • Wenrui Hao (Penn State University): Homotopy training algorithm for neural networks and applications in pattern formation

  • Vishnu Jejjala (University of the Witwatersrand): (K)not Machine Learning

  • Joe Kileel (University of Texas at Austin): Geometry and Optimization of Shallow Polynomial Networks


  • Michael Kirby (Colorado State University): The Grassmannian & Flag Manifolds for Analysing Data


  • Kathlén Kohn (KTH Royal Institute of Technology): The Geometry of Neural Networks

  • Sven Krippendorf (LMU): Learning Symmetries and Conserved Quantities of Physical Systems

  • Zehua Lai (University of Chicago): Recht–Ré Noncommutative Arithmetic-Geometric Mean Conjecture is False

Abstract: Stochastic optimization algorithms have become indispensable in modern machine learning. An important question in this area is the difference between with-replacement sampling and without-replacement sampling: does the latter have a superior convergence rate compared to the former? A paper of Recht and Ré reduces the problem to a noncommutative analogue of the arithmetic-geometric mean inequality in which n positive numbers are replaced by n positive definite matrices. If this inequality holds for all n, then without-replacement sampling (also known as random reshuffling) indeed outperforms with-replacement sampling in some important optimization problems. In this talk, we will explain basic ideas and techniques in polynomial optimization and the theory of the noncommutative Positivstellensatz, which allow us to reduce the conjectured inequality to a semidefinite program and the validity of the conjecture to certain bounds for the optimum values. Finally, we show that the Recht–Ré conjecture is false as soon as $n = 5$. This is joint work with Lek-Heng Lim.
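For reference, the conjectured inequality can be stated as follows; this is the norm inequality studied by Recht and Ré, which reduces to the classical AM-GM inequality when the matrices are scalars:

```latex
% Conjectured noncommutative AM-GM inequality (Recht--Re), for
% positive semidefinite matrices A_1, ..., A_n and the operator norm:
\left\| \frac{1}{n!} \sum_{\sigma \in S_n}
    A_{\sigma(1)} A_{\sigma(2)} \cdots A_{\sigma(n)} \right\|
\;\le\;
\left\| \frac{1}{n^n} \sum_{j_1, \dots, j_n = 1}^{n}
    A_{j_1} A_{j_2} \cdots A_{j_n} \right\|
% The right-hand side equals \| ((A_1 + \cdots + A_n)/n)^n \|;
% the talk shows the inequality already fails at n = 5.
```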


Recording

  • Shailesh Lal (Universidade do Porto): Machine Learning Etudes For Symmetries

Abstract: We demonstrate how modest feed-forward neural nets learn symmetries in datasets, with special focus on conformal symmetry. We also show how aspects of Lie algebra representation theory computations are machine-learnable. The talk is based on arXiv:2006.16114 and arXiv:2011.00871.


Recording

  • Joseph Landsberg (Texas A&M University): Tensors and algebraic geometry

Abstract: I will give an overview of the use of algebraic geometry in the study of tensors. The algebraic geometry involved is both classical (secant varieties, vector bundles on projective space) and quasi-modern (deformation theory, the Haiman-Sturmfels multi-graded Hilbert scheme, and the Quot scheme). I will also, as time permits, explain applications to complexity theory and quantum information theory.


Recording

  • Cody Long (Harvard): Statistical Predictions in String Theory and Deep Generative Models

Abstract: Generative models in deep learning allow for sampling probability distributions that approximate data distributions. I will discuss using generative models to make approximate statistical predictions in the string theory landscape. For vacua admitting a Lagrangian description, this can be thought of as learning random tensor approximations of couplings. As a concrete example, I will demonstrate that a large ensemble of metrics on the Kähler moduli space of Calabi-Yau threefolds is well-approximated by ensembles of matrices produced by a deep convolutional Wasserstein GAN.


Recording

  • Andre Lukas (Oxford): String Data and Machine Learning


  • Challenger Mishra (Cambridge): Neural Network Approximations for Calabi-Yau Metrics

Abstract: Ricci-flat metrics for Calabi-Yau threefolds are not known analytically. In this work, we employ techniques from machine learning to deduce numerical Ricci-flat metrics for the Fermat quintic, for the Dwork quintic, and for the Tian-Yau manifold. This investigation employs a single neural network architecture that is capable of approximating Ricci-flat Kähler metrics for several Calabi-Yau manifolds of dimensions two and three. We show that measures that assess the Ricci flatness of the geometry decrease after training by three orders of magnitude. This is corroborated on the validation set, where the improvement is more modest. Finally, we demonstrate that discrete symmetries of manifolds can be learned in the process of learning the metric.


Recording

  • Bernard Mourrain (Inria): The Geometry of Moments, Tensor Decomposition, Machine Learning and Applications

  • Giuseppe Pitton (Imperial): Computation, Data Analysis, and Statistical Inference for classes of Maximally-Mutable Laurent Polynomials

Abstract: Over the last 10 years, the collective efforts of pure mathematicians involved in the Fanosearch project [1, 2] have led to remarkable advances in the understanding of Fano classification using ideas from Mirror Symmetry.

A fundamental part of the Fanosearch project is the computation of some Laurent polynomials, called Maximally-Mutable Laurent Polynomials, that are naturally associated to a class of lattice polytopes.

Despite the tremendous computational challenges involved, we can construct mirror polynomials for large sets of lattice polytopes.

In this talk I will describe some recent computational results for the specific case of canonical Fano 3-topes.

I will discuss in particular how data analysis helps us explore the wealth of information resulting from our computations, and how we take advantage of algorithms from statistical inference to test known and conjectured results.

This is joint work (in progress) with Tom Coates and Alexander Kasprzyk.

[1] doi:10.4171/120-1/16

[2] doi:10.1090/proc/12876


  • Margaret Regan (Duke University): Using machine learning to determine the real discriminant locus

Abstract: Parameterized systems of polynomial equations arise in many applications in science and engineering with the real solutions describing, for example, equilibria of a dynamical system, linkages satisfying design constraints, and scene reconstruction in computer vision. Since different parameter values can have a different number of real solutions, the parameter space is decomposed into regions whose boundary forms the real discriminant locus. In this talk, I will discuss a novel sampling method for multidimensional parameter spaces and how it is used in various machine learning algorithms to locate the real discriminant locus as a supervised classification problem, where the classes are the number of real solutions. Examples such as the Kuramoto model will be used to show the efficacy of the methods. Finally, an application to real parameter homotopy methods will be presented. This project is joint work with Edgar Bernal, Jonathan Hauenstein, Dhagash Mehta, and Tingting Tang.
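A toy version of the classification setup may help fix ideas. The sketch below (my illustration, not the authors' pipeline) learns the discriminant locus of the quadratic $x^2 + bx + c$, whose real discriminant locus $b^2 - 4c = 0$ separates parameters with two real roots from those with none:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Sample the parameter space (b, c) and label each point with the
# number of real roots of x^2 + bx + c (0 or 2, generically).
rng = np.random.default_rng(0)
params = rng.uniform(-2, 2, size=(5000, 2))
labels = (params[:, 0] ** 2 - 4 * params[:, 1] > 0).astype(int) * 2

# Supervised classification: the decision boundary the network learns
# approximates the real discriminant locus b^2 - 4c = 0.
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                    random_state=0)
clf.fit(params[:4000], labels[:4000])
print("held-out accuracy:", clf.score(params[4000:], labels[4000:]))
```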


Recording

  • Fabian Ruehle (CERN): Moduli-dependent Calabi-Yau and SU(3)-structure metrics from Machine Learning

Abstract: Calabi-Yau manifolds play a crucial role in string compactifications. Yau's theorem guarantees the existence of a metric that satisfies the string's equation of motion. However, Yau's proof is non-constructive, and no analytic expressions for metrics on Calabi-Yau threefolds are known. We use neural networks to learn Calabi-Yau metrics and their complex structure moduli dependence. After a short introduction to CY manifolds, I will illustrate how we train neural networks to find Calabi-Yau metrics by using the underlying partial differential equations as loss functions. The approach generalizes to more general manifolds and can hence also be used for manifolds with reduced structure, such as SU(3) structure or G2 manifolds, which feature in string compactifications with flux and in the M-theory formulation of string theory, respectively. I will illustrate this generalization for a particular SU(3) structure metric and compare the machine learning result to the known, analytic expression.
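The idea of using a PDE as a loss function can be illustrated on a much simpler equation. The sketch below is a toy (the CY case replaces this residual with the Ricci-flatness condition): it fits a network to the boundary-value problem $u'' + u = 0$, $u(0)=0$, $u(\pi/2)=1$, whose solution is $\sin x$, by penalizing the PDE residual computed with automatic differentiation:

```python
import tensorflow as tf

net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.Adam(1e-3)
xs = tf.reshape(tf.linspace(0.0, 3.14159 / 2, 64), (-1, 1))

for _ in range(2000):
    with tf.GradientTape() as outer:
        # Nested tapes give u'' by differentiating through the network.
        with tf.GradientTape() as t2:
            t2.watch(xs)
            with tf.GradientTape() as t1:
                t1.watch(xs)
                u = net(xs)
            du = t1.gradient(u, xs)
        d2u = t2.gradient(du, xs)
        residual = d2u + u  # the PDE u'' + u = 0 as a loss term
        bc = (tf.reduce_sum(tf.square(net(xs[:1])))
              + tf.reduce_sum(tf.square(net(xs[-1:]) - 1.0)))
        loss = tf.reduce_mean(tf.square(residual)) + bc
    grads = outer.gradient(loss, net.trainable_variables)
    opt.apply_gradients(zip(grads, net.trainable_variables))
```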


Recording

  • Rak-Kyeong Seong (Samsung): Reinforcement Learning for Optimization Problems


  • Bernd Sturmfels (MPI Leipzig): Wasserstein distance to independence models

  • Tingting Tang (San Diego State University): The Loss Surface of Deep Linear Networks Viewed Through the Algebraic Geometry Lens