About: A regular online seminar covering how network structure is revealed through topology and algebra, as well as applications of these ideas to machine learning and other sciences. Topics covered in the seminar include (but are not limited to) homology theories for digraphs, algebras generated by digraphs and representation theory, applied category theory inspired by networks, TDA tools for networks, dynamics on networks, applications in machine learning and computational neuroscience, random graphs and complexes, and topological interpretations of graphs and hypergraphs.
Organizers: Daniela Egas Santander, Henri Riihimäki, and Jason P. Smith (in alphabetical order by last name). The Networks seminar is hosted under the Applied Algebraic Topology Research Network (AATRN). More information about AATRN is available at https://www.aatrn.net.
Google group: Used to announce talks; emails are sent once a month. To join, follow the link and click "Ask to join group": https://groups.google.com/g/aatrn-networks
Schedule: Seminars normally occur on the first Tuesday of every month, at 17:00 CET.
Talks and recordings: Our archive of past talks and the list of planned talks are given below; recordings are available on the AATRN Networks YouTube playlist. The title of each talk links directly to its recording. Talks from other AATRN seminars are also available on the AATRN YouTube channel.
Upcoming talks:
May 6, 2025, Chris Kapulkin, Western University
An invitation to discrete homotopy theory. Discrete homotopy theory, introduced around 20 years ago by Barcelo and collaborators building on the work of Atkin from the mid-seventies, is a homotopy theory of (simple) graphs. As such, it applies techniques previously employed in the "continuous" context to study discrete objects. It has found applications both within and outside mathematics, including: matroid theory, hyperplane arrangements, topological data analysis, time series analysis, and quite concretely in understanding how social interactions between preschoolers impact their academic performance. Recently, discrete homotopy theory has seen remarkable progress leading to proofs of several longstanding conjectures and new applications. This talk will be an introduction to discrete homotopy theory, highlighting some of the recent advances, of both theoretical and computational nature, and applications. These include the resolution of the conjecture of Babson, Barcelo, de Longueville, and Laubenbacher from 2006 (j/w Carranza, Compos. Math., 2024) and an efficient algorithm for computing discrete homology groups (j/w Kershaw, arXiv:2410.09939).
June 3, 2025, Carles Casacuberta, University of Barcelona
July 1, 2025, Massimo Ferri, University of Bologna
October 7, 2025, Hubert Wagner, University of Florida
November 4, 2025, Evan Patterson, Topos Institute
Past talks:
April 1, 2025, Lina Fajardo Gómez, University of South Florida
Prodsimplicial Complexes and Applications to Word Reduction Pathways. We propose custom-made cell complexes, in particular prodsimplicial complexes, in order to analyze pathways in directed graphs. The complexes are constructed by attaching cells that correspond to products of simplices and are best suited to studying data from acyclic directed graphs. We apply these tools to directed graphs associated with reductions of double occurrence words and study the effects of word operations on the homology of the corresponding graphs.
March 4, 2025, Karel Devriendt, University of Oxford
Spanning trees, effective resistances and curvature on graphs. Kirchhoff's celebrated matrix tree theorem expresses the number of spanning trees of a graph as the maximal minor of the Laplacian matrix of the graph. In modern language, this determinantal counting formula reflects the fact that spanning trees form a regular matroid. In this talk, I will give a short historical overview of the tree-counting problem and a related quantity from electrical circuit theory: the effective resistance. I will describe a characterization of effective resistances in terms of a certain polytope and discuss some recent applications to discrete notions of curvature on graphs. More details can be found in the preprint: https://arxiv.org/abs/2410.07756
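As a concrete illustration of the two quantities in this abstract (not part of the talk), here is a minimal Python sketch, assuming numpy and networkx are available: the spanning-tree count is a maximal minor of the graph Laplacian, and effective resistances can be read off its Moore-Penrose pseudoinverse.

```python
import numpy as np
import networkx as nx

# Small illustrative graph: the complete graph K4 has 4^(4-2) = 16 spanning trees (Cayley's formula).
G = nx.complete_graph(4)
L = nx.laplacian_matrix(G).toarray().astype(float)

# Matrix tree theorem: delete any one row and the corresponding column,
# then take the determinant of the remaining (n-1) x (n-1) matrix.
n_spanning_trees = int(round(np.linalg.det(L[1:, 1:])))
print(n_spanning_trees)  # expected: 16

# Effective resistance between nodes i and j via the Laplacian pseudoinverse:
# r_ij = L+_ii + L+_jj - 2 L+_ij.
Lplus = np.linalg.pinv(L)
i, j = 0, 3
r_ij = Lplus[i, i] + Lplus[j, j] - 2 * Lplus[i, j]
print(r_ij)  # for K4 this equals 1/2
```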
February 4, 2025, Chunyin Siu, Stanford University
Homology and Homotopy Properties of Scale-Free Networks. Many real-world networks are believed to be scale-free. We study the random model of preferential attachment for such networks. Our results show that preferential attachment favors higher-order connectivity, in the sense that it drives the growth of Betti numbers in the finite-graph setting, and it annihilates homotopy groups in the infinite-graph setting. More precisely, we determined the high-probability growth rates of the Betti numbers of the clique complexes of finite preferential attachment graphs, as well as the sharp threshold at which the infinite clique complex becomes homotopy-connected almost surely. This is joint work with Gennady Samorodnitsky, Christina Lee Yu, and Rongyi He. The talk is based on the preprints [https://arxiv.org/abs/2305.11259] and [https://arxiv.org/abs/2406.17619].
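A rough sketch of the kind of computation the abstract refers to (illustrative only, not the authors' code), assuming networkx and gudhi are installed; the Barabási-Albert generator below is an undirected stand-in for the preferential attachment model studied in the talk, and the parameters are placeholders.

```python
import networkx as nx
import gudhi

# Sample a Barabasi-Albert preferential attachment graph (parameters are illustrative).
G = nx.barabasi_albert_graph(n=200, m=3, seed=0)

# Build the clique (flag) complex on the graph, up to dimension 3.
st = gudhi.SimplexTree()
for v in G.nodes():
    st.insert([v])
for u, v in G.edges():
    st.insert([u, v])
st.expansion(3)  # add all cliques on at most 4 vertices as simplices

# Betti numbers of the resulting complex (persistence must be computed first).
st.persistence()
print(st.betti_numbers())
```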
December 3, 2024, Maxime Lucas, Université de Namur
Revealing patterns in brain activity with persistent homology. The brain is a notoriously complex system and its activity can exhibit a wide range of complex dynamics. We will discuss how persistent homology can help discriminate and describe these dynamics and the associated biological conditions that may induce them. In particular, we will talk about two cases: brain recordings (1) in human hypnosis experiments and (2) in epileptic fish. We first apply time-delay embedding to the time series to reconstruct the associated dynamical attractor, and then apply persistent homology. In both cases, persistent homology and the topological indicators associated with it allow us to unveil how the brain dynamics are affected by biological factors (e.g. hypnotic susceptibility in the first case, or fish line and genetic mutation in the second). In turn, these topological indicators may have the potential to serve as markers for these biological factors and be useful to practitioners.
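A minimal sketch of the pipeline described above (time-delay embedding followed by persistent homology), assuming numpy and the ripser package; the signal, delay, and embedding dimension are placeholders rather than values from the talk.

```python
import numpy as np
from ripser import ripser

def delay_embed(x, dim=3, tau=5):
    """Stack delayed copies of a 1D signal into points in R^dim (Takens-style embedding)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Placeholder signal standing in for a brain recording: a noisy oscillation.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Reconstruct the attractor and compute persistence diagrams of its Rips complex.
cloud = delay_embed(x, dim=3, tau=20)
dgms = ripser(cloud[::10], maxdim=1)["dgms"]  # subsample for speed
print(dgms[1])  # H1 diagram; a prominent bar reflects the loop of the limit cycle
```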
November 5, 2024, Emily Roff, University of Edinburgh
Homotopy by degrees, and the magnitude-path spectral sequence. The past decade has seen a proliferation of homology theories for graphs. In particular, large literatures have grown up around magnitude homology (due to Hepworth and Willerton) and path homology (Grigor’yan, Lin, Muranov and Yau). Though their origins are quite separate, Asao proved in 2022 that in fact these homology theories are intimately related. To every directed graph one can associate a certain spectral sequence - the magnitude-path spectral sequence, or MPSS - whose page E^1 is exactly magnitude homology, while path homology lies along a single axis of page E^2. In this talk, based on joint work with Richard Hepworth, I will describe the construction of the sequence and argue that each one of its pages deserves to be regarded as a homology theory for directed graphs, satisfying a Künneth theorem and an excision theorem, and with a homotopy-invariance property that grows stronger as we turn the pages of the sequence.
October 1, 2024, Marco Nurisso, Politecnico di Torino
Interactions and topological synchronization in the simplicial Kuramoto model. Simplicial Kuramoto models have emerged as a diverse and intriguing class of models that capture the dynamics of interacting oscillators placed on the simplices of a simplicial complex. Being formalized with the tools of discrete differential geometry, these models reveal interesting relationships between topology, geometry and dynamics. We leverage their mathematical structure to give a microscopic interpretation of the interaction terms, which, as we show, effectively include both higher-order interactions and self-interactions. This naturally leads us to establish an equivalence between the simplicial Kuramoto model and the standard Kuramoto model on pairwise networks under the condition that the underlying simplicial complex is a pseudomanifold. We then describe the notion of simplicial synchronization, its relation to simplicial homology, and derive bounds on the oscillators' coupling strength that are necessary or sufficient for achieving it.
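For readers meeting the model for the first time: in one common convention (which may differ from the speaker's notation), the edge-level simplicial Kuramoto equations read

\[
\dot{\theta} \;=\; \omega \;-\; \sigma\, B_1^{\top} \sin(B_1 \theta) \;-\; \sigma\, B_2 \sin(B_2^{\top} \theta),
\]

where \(\theta\) collects the phases on the edges, \(B_1\) is the node-edge incidence matrix and \(B_2\) the edge-triangle incidence matrix, so the two coupling terms route interactions through the nodes below and the triangles above each edge.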
June 4, 2024, Markus Reineke, Ruhr-Universität Bochum
Quiver representations and neural networks. Quiver representations formalize classification problems of linear algebra. We will review some basic concepts and aims of the representation theory of quivers, in particular the algebraic-geometric approach via quiver moduli spaces. We will model (certain aspects of) neural networks using the quiver language, and discuss what quiver representation theory can say, both qualitatively and quantitatively, about the space of network functions.
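For orientation, the standard definitions underlying the talk (not specific to it): a quiver \(Q\) is a directed graph with vertex set \(Q_0\) and arrow set \(Q_1\), and a representation of \(Q\) assigns

\[
V \;=\; \bigl( (V_i)_{i \in Q_0},\ (f_a : V_{s(a)} \to V_{t(a)})_{a \in Q_1} \bigr),
\]

a vector space to every vertex and a linear map to every arrow. For example, the weight matrices of a feed-forward network with layer widths \(n_0, \dots, n_k\) form a representation of the linear quiver \(\bullet \to \bullet \to \cdots \to \bullet\) with \(V_i = \mathbb{R}^{n_i}\).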
May 7, 2024, Sarah Percival, Michigan State University
Bounding the Interleaving Distance for Mapper Graphs with a Loss Function. Data consisting of a graph with a function mapping into R^d arise in many applications, encompassing structures such as Reeb graphs, geometric graphs, and knot embeddings. The ability to compare and cluster such objects is therefore required in a data analysis pipeline, leading to a need for distances between them. In this work, we study the interleaving distance on a discretization of these objects, R^d-mapper graphs, where functor representations of the data can be compared by finding pairs of natural transformations between them. However, in many cases, computation of the interleaving distance is NP-hard. For this reason, we take inspiration from recent work by Robinson to find quality measures for families of maps that do not rise to the level of a natural transformation, called assignments. We then endow the functor images with the extra structure of a metric space and define a loss function which measures how far an assignment is from making the required diagrams of an interleaving commute. Finally, we show that computing the loss function for a given assignment takes polynomial time. We believe this idea is both powerful and translatable, with the potential to provide approximations and bounds on interleavings in a broad array of contexts.
April 2, 2024, Katie Morrison, University of Northern Colorado
Predicting neural network dynamics from connectivity: a graph-theoretic and topological approach. Neural networks often exhibit complex patterns of activity that are shaped by the intrinsic structure of the network. For example, spontaneous sequences of neural activity have been observed in cortex and hippocampus, and patterned motor activity arises in central pattern generators for locomotion. We focus on a simplified neural network model known as Combinatorial Threshold-Linear Networks (CTLNs) in order to understand how the pattern of neural connectivity, as encoded by a directed graph, shapes the emergent nonlinear dynamics of the network. It has previously been shown that important aspects of these dynamics are controlled by the collection of stable and unstable fixed points of the network. In this talk, we highlight two different methods using covers of the connectivity graph to better understand the fixed points as well as the dynamics more broadly. These graph covers provide insight into network dynamics via either (1) the structure of the cover, e.g. its nerve, or (2) via the fixed points of the component subnetworks, which can be “glued” together to yield the fixed points of the full network. Both of these methods provide a significant dimensionality reduction of the network, giving insight into the emergent dynamics and how they are shaped by the network connectivity.
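For reference, a minimal simulation sketch of CTLN dynamics dx/dt = -x + [Wx + theta]_+ in Python, assuming only numpy; the parameter values and the convention W_ij = -1 + eps when j -> i is an edge (and -1 - delta otherwise) follow the CTLN literature, but the example graph is purely illustrative.

```python
import numpy as np

def ctln_weights(A, eps=0.25, delta=0.5):
    """Build the CTLN weight matrix from a binary adjacency matrix A (A[i, j] = 1 iff j -> i).

    Convention: W[i, j] = -1 + eps if j -> i, -1 - delta otherwise, 0 on the diagonal.
    """
    W = np.where(A == 1, -1 + eps, -1 - delta).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def simulate_ctln(W, theta=1.0, T=80.0, dt=0.01, x0=None, rng=None):
    """Euler integration of dx/dt = -x + [W x + theta]_+ (threshold-linear dynamics)."""
    n = W.shape[0]
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(0, 0.1, n) if x0 is None else np.asarray(x0, float)
    traj = []
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj.append(x.copy())
    return np.array(traj)

# Illustrative example: a 3-cycle 0 -> 1 -> 2 -> 0, which produces sequential
# (limit-cycle) activity rather than convergence to a stable fixed point.
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
traj = simulate_ctln(ctln_weights(A))
print(traj[-5:])  # the three units take turns being active
```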
March 5, 2024, Audun Myers, Pacific Northwest National Laboratory
Data Analysis Using Zigzag Persistence. Temporal hypergraphs are a powerful tool for modeling complex systems with multi-way interactions and temporal dynamics. However, existing tools for studying temporal hypergraphs do not adequately capture the evolution of their topological structure over time. In this work, we leverage zigzag persistence from Topological Data Analysis (TDA) to study the topological evolution of time-evolving graphs and hypergraphs. We apply our pipeline to several datasets including cyber security and social network datasets and show how the topological structure of their temporal hypergraph representations can be used to understand the underlying dynamics.
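A minimal sketch of the zigzag machinery itself, not the authors' temporal hypergraph pipeline, assuming the dionysus2 Python bindings and their documented zigzag_homology_persistence interface: each simplex carries the list of times at which it enters and leaves the complex, and the resulting barcodes track features across those changes.

```python
import dionysus as d

# A toy zigzag: a triangle boundary whose edge [0, 2] is present only part of the time.
simplices = [[0], [1], [2], [0, 1], [1, 2], [0, 2]]
# For each simplex, the alternating times at which it appears and disappears.
times = [[0.0], [0.0], [0.0], [0.0], [0.0], [1.0, 2.0]]

f = d.Filtration(simplices)
zz, dgms, cells = d.zigzag_homology_persistence(f, times)

for dim, dgm in enumerate(dgms):
    print("dimension", dim)
    for p in dgm:
        print(p)  # a 1-dimensional bar [1, 2) records the loop while [0, 2] is present
```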
February 13, 2024, Carlo Collari, University of Pisa
Groebner methods and magnitude homology. In this talk we show how to apply the framework developed by Sam and Snowden to study structural properties (e.g. bounds on rank and order of torsion) of graph homologies, in the spirit of Ramos, Miyata and Proudfoot. In particular, we focus on magnitude homology for graphs, which was introduced by Hepworth and Willerton. The talk is organised as follows: we start with a short introduction to modules over categories and to the theory of Groebner categories. Then, we introduce magnitude homology and see some examples. Finally, we will see how to use the theory of Groebner categories to obtain information on magnitude (co)homology.
December 5, 2023, Dani Bassett, University of Pennsylvania
Science as branched flow: A case study in citation disparities. Science is a beautiful rational process of highly structured inquiry that allows us to learn more about our world. By it, we see past old theories, and build new ones. We realize a phenomenon occurs because of this, and not that. Perennially the skeptic, we spar with our own internal models of how things might happen: always questioning, ever critical, rarely certain. What if we were to turn this audacious questioning towards—not science—but how we do science? Not broadly a natural phenomenon but more specifically a human phenomenon? This query is precisely what drives the field of the science of science. How does science happen? How do we choose scientific questions to pursue? How do we map fields of inquiry? How do we determine where the frontiers are, and then step beyond them? In this talk, I will canvas this broader research agenda while foregrounding recent advances at the intersection of science of science, machine learning, and big data. Along the way, I’ll uncover gender, racial, and ethnic inequalities in the most obvious of places (the demographics of scientists) and also in the most unexpected and out-of-the-way places (the reference list of journal articles). I will consider what these data mean for the way we think about science—for our theories of what science is. What opportunities might we have to see past old theories and build a new one? What possibilities to lay down a new praxis for a science of tomorrow?
November 7, 2023, Cristian Bodnar, Microsoft
A Sheaf-based Approach to Graph Neural Networks. The multitude of applications where data is attached to spaces with non-Euclidean structure has driven the rise of the field of Geometric Deep Learning (GDL). Nonetheless, from many points of view, geometry does not always provide the right level of abstraction to study all the spaces that commonly emerge in such settings. For instance, graphs, by far the most prevalent type of space in GDL, do not even have a geometrical structure in the strict sense. In this talk, I will explore how we can take a sheaf-theoretic perspective of the field with a focus on understanding and developing new Graph Neural Network models.
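To make the sheaf-theoretic perspective concrete, a small numpy sketch (illustrative only, not the models from the talk): a cellular sheaf on a graph assigns a stalk to each node and a restriction map to each node-edge incidence, the sheaf Laplacian is assembled from those maps block by block, and one step of linear sheaf diffusion damps disagreement between neighbouring stalks. The random orthogonal restriction maps are an arbitrary choice for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small graph and a cellular sheaf with d-dimensional stalks on every node.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n, d = 4, 2

# One restriction map F_{v <= e} per (edge, endpoint); random orthogonal maps here.
def random_orthogonal(d):
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

restriction = {(e, v): random_orthogonal(d) for e in edges for v in e}

# Sheaf Laplacian L_F: for each edge e = (u, v), the disagreement
# F_{u<=e} x_u - F_{v<=e} x_v is pulled back to the stalks of u and v.
L = np.zeros((n * d, n * d))
for e in edges:
    u, v = e
    Fu, Fv = restriction[(e, u)], restriction[(e, v)]
    L[u*d:(u+1)*d, u*d:(u+1)*d] += Fu.T @ Fu
    L[v*d:(v+1)*d, v*d:(v+1)*d] += Fv.T @ Fv
    L[u*d:(u+1)*d, v*d:(v+1)*d] -= Fu.T @ Fv
    L[v*d:(v+1)*d, u*d:(u+1)*d] -= Fv.T @ Fu

# One step of (linear) sheaf diffusion on stacked node features x: x <- x - alpha * L_F x.
x = rng.standard_normal(n * d)
alpha = 0.1
x_next = x - alpha * L @ x
print(np.round(x_next.reshape(n, d), 3))
```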