Thanasis Bouganis, Durham University
Sarah Rees, University of Newcastle
Alex Fink, Queen Mary University London
Jiayi Li, MPI Dresden
Thursday 21st August 2025
Hybrid meeting.
In person: University of Edinburgh (Bayes Centre, Lecture Theatre 5.10)
On Zoom: Click https://ed-ac-uk.zoom.us/j/4930560543?omn=86231573667 (Passcode: AAG@2025)
Dimitra Kosta, University of Edinburgh
If you wish to attend the meeting (even remotely), please register by sending an e-mail 📬 to Dimitra Kosta (D.Kosta@ed.ac.uk).
9:30-10:00 Welcome and coffee
10:00-11:00 Thanasis Bouganis
11:00-12:00 Sarah Rees
12:00-13:30 Lunch
13:30-14:30 Alex Fink
14:30-15:30 Jiayi Li (online)
18:00 Dinner
Thanasis Bouganis, Durham University
Title: On an analogue of the doubling method in coding theory.
Abstract: In this talk I will start by discussing some well-known connections between coding theory and modular forms, and in particular the relation between Gleason's and Hecke's theorems. Then I will discuss recent joint work with Jolanta Marzec-Ballesteros on an analogue of the doubling method from the theory of higher-rank modular forms in coding theory. If time permits, I will then explain how we can use our result to solve an analogue of the "basis problem" in coding theory. That is, to express "cuspidal" polynomials which are invariant under a Clifford-Weil type group as an explicit linear combination of higher-genus weight enumerators of self-dual codes of that type.
Alex Fink, Queen Mary University London
Title: More ways matroids are like algebraic varieties
Abstract: Matroids are fundamental combinatorial objects. One way to think about a matroid is as recording the combinatorics of which coordinates can vanish together on a linear subspace. Not all matroids come from linear spaces, but in a surprisingly rich collection of ways they behave like they do: matroids have properties that come from an algebro-geometric construction when the linear space exists, but that still hold when it doesn't. Time permitting, I'll talk about a few of these, but my target will be new work in progress with Eur and Larson stating some cohomology vanishing theorems that give a second proof of Speyer's f-vector conjecture.
Sarah Rees, University of Newcastle
Title: Geodesics and rewriting in Artin groups
Abstract: I'll talk about Artin groups. These are groups defined by their finite presentations, which belong to a very general class, containing groups with (apparently) quite a range of properties. Some of these groups (e.g. the braid groups) have quite natural geometric origins, but for most of them very little geometric information is known. In particular I will talk about rewriting in these groups, the structure of geodesic words, and the solution of word problems, referring to my work with Holt, to work of Blasco, Cumplido and Morris-Wright, and very recently of all five authors together. And I will examine how some rewriting techniques relate the Artin group to the associated Artin monoid. Looking at the relationship between the Deligne complexes of the Artin monoid M and the Artin group G, Boyd, Charney, Morris-Wright and I formed the conjecture that the Cayley graph Cay(G,M) of G over M (considered as a generating set) has finite diameter precisely when G has spherical type. Using the rewriting techniques that are now available for Artin groups we can prove the conjecture except for non-spherical G with all parabolic subgroups spherical.
Jiayi Li, MPI Dresden
Title: Geometry of Neural Networks with Algebraic Activations
Abstract: We consider neural networks with polynomial and rational activation functions. The choice of activation function in deep learning architectures is crucial for practical tasks and largely impacts the performance of a neural network. Leveraging tools from numerical algebraic geometry, we establish precise measures for the expressive power of neural networks with polynomial activation functions by studying the image of the parametrization map from weights to functions, which forms an irreducible algebraic variety upon taking closure. In addition, we study the optimization landscape of neural networks with algebraic activation functions and characterize the presence or absence of spurious critical points in the loss surface when activation coefficients are fixed versus trainable.
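As a small illustration of the parametrization map discussed in the abstract (this sketch is not from the talk; the toy network and the function `coeffs` below are our own illustrative choices): a one-hidden-layer network with the polynomial activation σ(t) = t² computes a degree-2 polynomial in its input, so the map from weights to polynomial coefficients lands in an algebraic set.

```python
def coeffs(params):
    """Coefficients (c0, c1, c2) of the polynomial computed by the toy
    one-hidden-layer network x -> sum_i v_i * (w_i * x + b_i)**2,
    i.e. a network with the polynomial activation sigma(t) = t**2.
    params is a list of (w_i, b_i, v_i) triples."""
    c0 = sum(v * b * b for w, b, v in params)
    c1 = sum(2 * v * w * b for w, b, v in params)
    c2 = sum(v * w * w for w, b, v in params)
    return c0, c1, c2

# With a single hidden neuron, every reachable coefficient vector
# satisfies c1**2 = 4*c0*c2: the image lies on a quadric cone, an
# algebraic variety inside coefficient space.
c0, c1, c2 = coeffs([(1.5, -0.5, 2.0)])
assert abs(c1 * c1 - 4 * c0 * c2) < 1e-9

# With two hidden neurons the constraint disappears: this choice of
# parameters realizes (c0, c1, c2) = (1, 0, -1), which violates it.
c0, c1, c2 = coeffs([(0.0, 1.0, 1.0), (1.0, 0.0, -1.0)])
print(c0, c1, c2)
```

Characterizing the (closure of the) image of such parametrization maps for deeper networks and higher-degree activations is the kind of expressivity question the talk addresses with tools from numerical algebraic geometry.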
We are grateful for the financial support from the Isaac Newton Institute, the Glasgow Mathematical Journal Learning and Research Support Fund, the Edinburgh Mathematical Society, and the London Mathematical Society.