All times are in the Central European Time (CET) zone.
The Zoom link can be found here:
Vladan B. Devedžić, University of Belgrade
Monday, Dec 15, 10:00-11:00
Title: Bridge over troubled water: Large Language Models in practice
Abstract: Large Language Models (LLMs) have rapidly evolved from research novelties into indispensable technological infrastructure. This talk moves beyond the theoretical potential and development of LLMs per se to explore established, high-impact use cases across important domains. It examines how LLMs help bridge an essential gap, connecting raw data to actionable insights and solving complex development challenges. Specifically, the talk focuses on the following five application areas: medical research, where LLMs accelerate clinical data extraction, diagnostics, and novel therapeutic discoveries; software engineering, where LLMs drive tasks from code generation and debugging to documentation; the intersection of music and machine learning (ML), focusing on the sophisticated task of using LLMs to create, curate, augment, and analyze complex music datasets for advanced ML applications; education, where LLMs personalize learning and democratize tutoring, but also raise many practical challenges; and research publishing, where LLMs streamline the synthesis, peer review, and dissemination of academic findings. The talk offers a pragmatic overview of the current state of LLM adoption, highlighting successes, addressing operational challenges, and providing a roadmap for future, context-specific integration.
Tony Shaska, Oakland University
Monday, Dec 15, 11:00-12:00
Title: Beyond Prediction: Crafting AI That Thinks with Graded Intelligence
Abstract: This talk presents a transformative framework for artificial neural networks over graded vector spaces, tailored to model hierarchical and structured data in fields like algebraic geometry and physics. By exploiting the algebraic properties of graded vector spaces, where features carry distinct weights, we extend classical neural networks with graded neurons, layers, and activation functions that preserve structural integrity. Grounded in group actions, representation theory, and graded algebra, our approach combines theoretical rigor with practical utility. We introduce graded neural architectures, loss functions prioritizing graded components, and equivariant extensions adaptable to diverse gradings. Case studies validate the framework's effectiveness, outperforming standard neural networks in tasks such as predicting invariants in weighted projective spaces and modeling supersymmetric systems. This work establishes a new frontier in machine learning, merging mathematical sophistication with interdisciplinary applications. Future challenges, including computational scalability and finite field extensions, offer rich opportunities for advancing this paradigm.
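The central idea of grade-preserving maps can be illustrated with a toy sketch (my own illustration, not the speaker's construction): if each feature carries a grade, a grade-preserving linear layer acts block-diagonally, mixing features only within the same grade.

```python
import numpy as np

# Toy sketch (my illustration, not the speaker's construction): features carry
# grades, and a grade-preserving linear layer acts block-diagonally, mixing
# features only within the same grade.
def graded_linear(x, blocks, grades):
    """Apply the weight block for grade d to the grade-d features of x."""
    y = np.empty_like(x, dtype=float)
    for d, W in blocks.items():
        idx = np.flatnonzero(grades == d)
        y[idx] = W @ x[idx]
    return y

grades = np.array([0, 0, 1, 1, 1])        # two grade-0 and three grade-1 features
blocks = {0: 2.0 * np.eye(2),             # grade-0 block: scale by 2
          1: np.ones((3, 3))}             # grade-1 block: sum the entries
x = np.array([1.0, 2.0, 1.0, 1.0, 1.0])
print(graded_linear(x, blocks, grades))   # [2. 4. 3. 3. 3.]
```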
Gitta Kutyniok, Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence
Monday, Dec 15, 14:00-15:00
Title: Reliable and Sustainable AI: From Mathematical Foundations to Next Generation AI Computing
Abstract: The current wave of artificial intelligence is transforming industry, society, and the sciences at an unprecedented pace. Yet, despite its remarkable progress, today’s AI still suffers from two major limitations: a lack of reliability and excessive energy consumption.
This lecture will begin with an overview of this dynamic field, focusing first on reliability. We will present recent theoretical advances in the areas of generalization and explainability -- core aspects of trustworthy AI that also intersect with regulatory frameworks such as the EU AI Act. From there, we will explore fundamental limitations of existing AI systems, including challenges related to computability and the energy inefficiency of current digital hardware. These challenges highlight the pressing need to rethink the foundations of AI computing.
In the second part of the talk, we will turn to neuromorphic computing -- a promising and rapidly evolving paradigm that emulates biological neural systems using analog hardware. We will introduce spiking neural networks, a key model in this area, and share some of our recent mathematical findings. These results point toward a new generation of AI systems that are not only provably reliable but also sustainable.
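As a concrete anchor for the spiking-network model mentioned here, the leaky integrate-and-fire neuron, the standard building block of such systems, can be simulated in a few lines (an illustrative Euler discretization with arbitrary constants, not the speaker's model):

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, the standard building block of
# spiking neural networks (an illustrative Euler discretization with arbitrary
# constants, not the speaker's model).
def lif(current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    v, spikes, trace = 0.0, [], []
    for i_t in current:
        v += dt / tau * (-v + i_t)     # leaky integration of the input current
        if v >= v_thresh:              # crossing the threshold emits a spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

spikes, trace = lif(np.full(100, 1.5))  # constant drive above threshold
print("spike count:", spikes.sum())     # fires every 11 steps: 9 spikes
```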
Jurgen Mezinaj, Oakland University
Monday, Dec 15, 15:00-15:30
Title: From Polynomials to Databases: Arithmetic Structures in Galois Theory
Abstract: We develop a computational framework for classifying Galois groups of irreducible degree-7 polynomials over Q, combining explicit resolvent methods with machine learning techniques. A database of over one million normalized projective septics is constructed, each annotated with algebraic invariants J0, ..., J4 derived from binary transvections. For each polynomial, we compute resolvent factorizations to determine its Galois group among the seven transitive subgroups of S7 identified by Foulkes. Using this dataset, we train a neurosymbolic classifier that integrates invariant-theoretic features with supervised learning, yielding improved accuracy in detecting rare solvable groups compared to coefficient-based models. The resulting database provides a reproducible resource for constructive Galois theory and supports empirical investigations into group distribution under height constraints. The methodology extends to higher-degree cases and illustrates the utility of hybrid symbolic-numeric techniques in computational algebra.
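One inexpensive invariant used when sorting septics into Galois-group classes (a standard fact, though not necessarily part of the authors' pipeline) is the discriminant test: the Galois group of an irreducible septic lies in the alternating group A7 exactly when the discriminant is a rational square. A sketch with sympy:

```python
from math import isqrt
from sympy import Poly
from sympy.abc import x

# Standard fact (not necessarily part of the authors' pipeline): for an
# irreducible septic f over Q, Gal(f) lies in the alternating group A_7
# if and only if disc(f) is a rational square.
def disc_is_square(f):
    d = int(Poly(f, x).discriminant())
    return d > 0 and isqrt(d) ** 2 == d

print(disc_is_square(x**7 - 7*x + 3))  # Trinks' septic, Gal = PSL(3,2) in A_7: True
print(disc_is_square(x**7 - x - 1))    # Gal = S_7, discriminant not a square: False
```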
Sadok Kallel, American University of Sharjah
Monday, Dec 15, 15:30-16:00
Title: On the topology of piecewise geodesic loops
Abstract: We provide a number of “approximations” or “models” for the loop spaces, both based and unbased, of a compact Riemannian manifold M. These are group or H-space models constructed in terms of broken geodesic paths. We hope our models can have applications to machine learning algorithms.
Alexander Schmitt, Freie Universität Berlin
Monday, Dec 15, 16:00-16:30
Title: Moduli spaces attached to neural networks
Abstract: Armenta and Jodoin introduced moduli spaces of quiver representations as a tool for the theoretical investigation of neural networks, and Armenta, Brüstle, Hassoun, and Reineke established many geometric properties of those moduli spaces in a generality which goes beyond the standard setting of neural networks. In this talk, I will report on joint work with Armenta and Esquivel Araya concerning the geometry of the moduli spaces in the most basic case. The results concern equations for the moduli spaces and their real points. We also give a geometric proof of a result of Meng et al.
Fabian Ruehle, Northeastern University
Tuesday, Dec 16, 10:00-11:00
Title: Rigorous results from Machine Learning
Abstract: I will discuss two ways to obtain rigorous results from machine learning. The first is to use a more interpretable Neural Network architecture. I will explain Kolmogorov-Arnold Networks and how they can be used in symbolic regression. The second approach is to formulate the problem as a (single-player) game and solve it with Reinforcement Learning. By studying episodic rollouts, each step of the agent towards the solution can be studied and its correctness verified. By studying multiple rollouts, one can try to infer the heuristic learned by the RL agent, which can inform human solution strategies. I will illustrate these techniques with applications to representation theory and low-dimensional topology.
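The single-player-game approach can be miniaturized as follows (a hypothetical toy, unrelated to the talk's applications): tabular Q-learning on a short chain game, after which the greedy rollout can be inspected and its steps verified one by one, exactly in the spirit described above.

```python
import random

# Hypothetical toy illustrating the game-plus-RL recipe (not the talk's
# applications): the agent starts at state 0 of a 6-state chain and wins by
# reaching state 5. Tabular Q-learning learns the policy; the greedy rollout
# can then be inspected step by step.
random.seed(0)
N = 6
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}

def greedy(s):
    return max((-1, 1), key=lambda a: Q[(s, a)])

for _ in range(500):                       # training episodes
    s = 0
    while s != N - 1:
        a = random.choice((-1, 1)) if random.random() < 0.2 else greedy(s)
        s2 = min(max(s + a, 0), N - 1)     # chain with clamped ends
        reward = 1.0 if s2 == N - 1 else 0.0
        target = reward + 0.9 * max(Q[(s2, -1)], Q[(s2, 1)]) * (s2 != N - 1)
        Q[(s, a)] += 0.5 * (target - Q[(s, a)])
        s = s2

rollout, s = [], 0
for _ in range(3 * N):                     # capped so a bad policy cannot loop
    if s == N - 1:
        break
    a = greedy(s)                          # each greedy step is checkable
    rollout.append(a)
    s = min(max(s + a, 0), N - 1)
print(rollout)                             # the optimal policy moves right at every state
```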
Vladimir Dragović, Department of Mathematical Sciences & Texas AI Research Institute, The University of Texas at Dallas
Tuesday, Dec 16, 11:00-12:00
Title: Bridging Statistics with Geometry and Mechanics
Abstract: We emphasize the importance of bridges between statistics, mechanics, and geometry. We develop and employ links between pencils of quadrics, moments of inertia, and linear and orthogonal regressions. For a given system of points in $R^k$ representing a sample of full rank, we construct a pencil of confocal quadrics, which proves to be a useful geometric tool for studying the data. Some of the obtained results can be seen as generalizations of classical results of Pearson on orthogonal regression. Applications include the statistics of errors-in-variables (EIV) models and restricted regressions, both ordinary and orthogonal. For the latter, a new formula for the test statistic is derived, using the Jacobi elliptic coordinates associated to the pencil of confocal quadrics. The developed methods and results are illustrated on natural statistical examples. The talk is based on joint work with Borislav Gajić and the following papers:
[1] V. Dragović and B. Gajić, Points with rotational ellipsoids of inertia, envelopes of hyperplanes which equally fit the system of points in $R^k$, and ellipsoidal billiards, Physica D: Nonlinear Phenomena, Vol. 451, 133776, 2023.
[2] V. Dragović and B. Gajić, Orthogonal and Linear Regressions and Pencils of Confocal Quadrics, Statistical Science, Vol. 40, No. 2, 289–312, 2025.
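Pearson's classical orthogonal regression, the baseline these results generalize, is short enough to state in code (a standard SVD formulation, included only for orientation): the best-fit line minimizes the sum of squared orthogonal distances, so its normal direction is the smallest principal axis of the centered sample.

```python
import numpy as np

# Pearson's classical orthogonal regression in its standard SVD form (included
# for orientation; the talk generalizes this setting): the best-fit line
# minimizes the sum of squared *orthogonal* distances, so its normal is the
# smallest principal axis of the centered sample.
def orthogonal_fit(points):
    mean = points.mean(axis=0)
    # Rows of vt are the principal axes, largest variance first; take the last.
    _, _, vt = np.linalg.svd(points - mean, full_matrices=False)
    normal = vt[-1]
    return normal, normal @ mean           # line: normal . x = normal . mean

rng = np.random.default_rng(1)
t = rng.uniform(-1.0, 1.0, 200)
pts = np.column_stack([t, 2.0 * t + 3.0])  # noiseless sample of y = 2x + 3
normal, offset = orthogonal_fit(pts)
slope = -normal[0] / normal[1]
print(round(slope, 6))                     # recovers the slope: 2.0
```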
Yang-Hui He, London Institute for Mathematical Sciences
Tuesday, Dec 16, 14:00-15:00
Title: A Triumvirate of AI-assisted Mathematics
Abstract: We describe how AI can assist mathematics in three ways: theorem-proving, conjecture formulation, and language processing. Inspired by initial experiments in geometry and string theory in 2017, we summarize how this emerging field has grown over the past years, and show how various machine-learning algorithms can help with pattern detection across disciplines ranging from algebraic geometry to representation theory, combinatorics, and number theory. At the heart of the programme is the question of how AI helps with theoretical discovery, and what the implications are for the future of mathematics.
Sajad Salami, Rio de Janeiro State University
Tuesday, Dec 16, 15:00-16:00
Title: Q-rational points on loci of genus 2 curves with (n,n)-split Jacobians
Abstract: This talk is based on my current work in progress. First, we review the theory of the moduli space of genus 2 curves, with a focus on the problem of counting rational points, as well as the concept of weighted heights on weighted projective spaces, and demonstrate how these provide a natural framework for studying this problem. Then, we consider the special loci $\mathcal{L}_n$, for $n = 2, 3$, and $5$, of genus 2 curves whose Jacobian varieties are $(n, n)$-split. For any $\varepsilon > 0$, we show that the rational points of weighted multiplicative height at most $B$ on $\mathcal{L}_n$ defined over $\mathbb{Q}$ are contained in an auxiliary hypersurface $Y_n$ such that $Y_n$ does not contain $\mathcal{L}_n$ and the weighted degree of $Y_n$ is bounded in terms of $B$, the weight vector, the weighted degree of $\mathcal{L}_n$, and $\varepsilon$. Moreover, we provide an explicit defining equation of $Y_2$.
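For orientation, the weighted multiplicative height is easy to compute under one common normalization (this sketch assumes normalized integer coordinates and is not necessarily the talk's exact definition):

```python
# Illustrative only: under one common normalization (not necessarily the exact
# definition used in the talk), a point (x_0 : ... : x_n) of a weighted
# projective space with weights (q_0, ..., q_n), given normalized integer
# coordinates, has weighted multiplicative height max_i |x_i|^(1/q_i).
def weighted_height(coords, weights):
    return max(abs(c) ** (1.0 / q) for c, q in zip(coords, weights))

# Genus-2 moduli live in the weighted projective space with weights
# (2, 4, 6, 10), the weights of the Igusa invariants J_2, J_4, J_6, J_10:
print(round(weighted_height((4, 16, 64, 1024), (2, 4, 6, 10)), 9))   # 2.0
```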
Andrew Obus, Baruch College
Tuesday, Dec 16, 16:00-17:00
Title: Mac Lane Valuations in Algebraic Geometry
Abstract: Almost 90 years ago, Mac Lane showed how to represent “geometric” valuations of rational function fields over discretely valued fields in a compact, explicit way, using only a list of polynomials and rational numbers. It turns out that this is also a useful way to represent models of the projective line over a discrete valuation ring, and to answer a number of questions in algebraic geometry related to such models. We will give an overview of the subject, and then discuss applications to desingularization of superelliptic curves.
Vasyl Ustimenko, Royal Holloway, University of London
Wednesday, Dec 17, 10:00-11:00
Title: On random walks on temporal analogue of geometries of Kac-Moody groups and Post Quantum Cryptography
Abstract: Artificial Intelligence technology can be used to construct generators of pseudo-random or genuinely random sequences of elements from a selected field. This technique can be used for the symbolic computation of walks in temporal analogues of geometries of Kac-Moody groups defined over finite fields. We use the semigroups of such walks to construct key exchange protocols, whose security is justified by the complexity of the Conjugacy Power Problem or of the Navigation Problem, and to construct Multivariate Public Keys of algebraic nature.
Ilias Kotsireas, Wilfrid Laurier University, Ontario, Canada
Wednesday, Dec 17, 11:00-11:30
Title: Challenging Design Theory problems
Abstract: We shall explain some challenging Design Theory problems that we believe can be tackled with AI methods. These problems are defined via the unifying concept of the autocorrelation function.
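For a ±1 sequence, the periodic autocorrelation function behind these problems is one line of code (the specific sequence below is my own example, not one from the talk); the quadratic-residue sequence of length 7 attains the ideal off-peak value −1 at every shift:

```python
# The periodic autocorrelation function (PAF) for a {+1, -1} sequence
# (illustrative; the sequence below is my example, not one from the talk).
def paf(seq, shift):
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n] for i in range(n))

# Quadratic-residue sequence of length 7: +1 on the nonzero squares mod 7,
# namely {1, 2, 4}, and -1 elsewhere.
seq = [1 if i in (1, 2, 4) else -1 for i in range(7)]
print([paf(seq, s) for s in range(7)])   # [7, -1, -1, -1, -1, -1, -1]
```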
Nadia El Mrabet
Wednesday, Dec 17, 11:30-12:00
Title: Optimising arithmetic for cryptography
Abstract: Mathematical problems ensure the robustness of cryptosystems and form the foundation of algorithms for cryptographic protocols. Optimizing implementations often involves optimizing the arithmetic computations that define the cryptosystem. For instance, in pairing-based cryptography, the main mathematical operations include modular arithmetic, point doubling and addition on elliptic curves, fast exponentiations, and computations of Frobenius maps and inverses. Finding decompositions that preserve correctness while minimizing computational complexity can be done manually, but it requires extensive manipulation of equations and algebraic expressions. Could artificial intelligence assist us in validating or discovering new expressions?
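The curve operations named above have a compact textbook form: affine chord-and-tangent addition on a short-Weierstrass curve over a prime field (an illustrative sketch only; real pairing implementations use projective coordinates, extension-field towers, and constant-time code).

```python
# Textbook chord-and-tangent arithmetic on y^2 = x^3 + a*x + b over F_p
# (illustrative sketch; production pairing code uses projective coordinates,
# extension-field towers, and constant-time implementations).
def ec_add(P, Q, a, p):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)    # tangent slope (doubling)
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p)           # chord slope (addition)
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

# Small worked example: y^2 = x^3 + 2x + 2 over F_17 with P = (5, 1).
P = (5, 1)
print(ec_add(P, P, a=2, p=17))        # doubling: (6, 3)
print(ec_add(P, (6, 3), a=2, p=17))   # P + 2P:   (10, 6)
```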
Lisa Carbone, Rutgers University
Wednesday, Dec 17, 14:00-15:00
Title: AI tools for infinite dimensional symmetry groups
Abstract: Lie group analogs for infinite dimensional Lie algebras are sophisticated mathematical structures, many of which encode symmetries in high-energy theoretical physics. Of particular interest are those groups associated to Borcherds (generalized Kac-Moody) algebras, particularly the Monster Lie algebra m. This Lie algebra, discovered by Borcherds, admits an action of the Monster finite simple group M and played an important role in the solution of part of the Conway-Norton Monstrous Moonshine conjecture. A Lie group analog for m has been recently constructed, completing a long-sought objective in the theory. However, the scale and complexity of this vast new structure pose significant computational and theoretical challenges. We discuss how various AI-driven approaches have been used to overcome these obstacles and to navigate this infinite dimensional landscape.
Sergey Khashin, Ivanovo University, Russia
Wednesday, Dec 17, 15:00-15:30
Title: Comparison of the efficiency of zero- and first-order minimization methods in neural networks
Abstract: To minimize the objective function in neural networks, first-order methods are usually used, which involve repeated calculation of the gradient. The number of variables in modern neural networks can be many thousands or even millions. Numerous experiments show that analytically computing the gradient of a function of N variables takes approximately N/4 times as long as computing the function itself. We consider the possibility of using zero-order methods to minimize the objective function. In particular, we propose a new zero-order minimization method: descent over two-dimensional subspaces. We compare the convergence rates of three different methods: standard gradient descent with automatic step selection, coordinate descent with step selection for each coordinate, and descent over two-dimensional subspaces. We show that, on the considered neural-network training problems, the efficiency of properly organized zero-order methods is not lower than that of gradient methods.
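The comparison can be miniaturized as follows (a toy convex quadratic, not the paper's neural-network experiments): gradient descent uses the analytic gradient, while the coordinate method below uses only function evaluations.

```python
import numpy as np

# Toy comparison (not the paper's experiments): first-order gradient descent
# vs. a zero-order coordinate descent that uses only function values, on the
# convex quadratic f(v) = 0.5 * v^T A v.
A = np.diag([1.0, 10.0])

def f(v):
    return 0.5 * v @ A @ v

xg = np.array([1.0, 1.0])                 # first-order: gradient descent
for _ in range(100):
    xg = xg - 0.05 * (A @ xg)             # analytic gradient A v, fixed step

xc = np.array([1.0, 1.0])                 # zero-order: coordinate descent
h = 1e-4
for _ in range(3):                        # a few sweeps over the coordinates
    for i in range(2):
        e = np.zeros(2); e[i] = h
        slope = (f(xc + e) - f(xc - e)) / (2 * h)            # finite differences
        curv = (f(xc + e) - 2 * f(xc) + f(xc - e)) / h**2
        xc[i] -= slope / curv             # exact 1-D Newton step on a quadratic

print("gradient descent:   f =", f(xg))
print("coordinate descent: f =", f(xc))
```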
Chia Zargeh, Modern College of Business and Science
Wednesday, Dec 17, 15:30-16:00
Title: Groebner bases and label codes of lattices
Abstract: In this work, we review the label codes of root lattices. We provide a description of the label codes of some lattices using Groebner bases and binomial ideals. This presentation highlights the effectiveness of Groebner bases in representing lattice label codes.
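As a pointer to the main tool, sympy computes Groebner bases and decides ideal membership directly (a generic example, not tied to any particular lattice): membership testing is what lets binomial ideals describe label codes algorithmically.

```python
from sympy import groebner, symbols

# Generic example of the main tool (not tied to any particular lattice):
# a Groebner basis makes ideal membership decidable.
x, y = symbols('x y')
G = groebner([x**2 - y, x * y - 1], x, y, order='lex')
print(list(G.exprs))              # a lex Groebner basis of the ideal
# x^3 - 1 = x*(x^2 - y) + (x*y - 1), so it lies in the ideal:
print(G.contains(x**3 - 1))       # True
```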
Victoria Rayskin, Minnesota State University
Wednesday, Dec 17, 16:00-16:30
Title:
Abstract: