About: A regular online seminar covering how network structure is revealed through topology and algebra, as well as applications of these to machine learning and other sciences. Topics covered in the seminar include (but are not limited to) homology theories for digraphs, algebras generated by digraphs and representation theory, applied category theory inspired by networks, TDA tools in networks, dynamics on networks, applications to machine learning and computational neuroscience, random graphs and complexes, and topological interpretations of graphs and hypergraphs.
Organizers: Daniela Egas Santander, Henri Riihimäki, and Jason P. Smith (in alphabetical order by last name). The Networks seminar is hosted under the Applied Algebraic Topology Research Network (AATRN); more information about this organization is available at https://www.aatrn.net.
Google group: Used to announce talks; emails are sent about once a month. To join, follow the link and click "Ask to join group": https://groups.google.com/g/aatrn-networks
Schedule: Seminars normally occur on the first Tuesday of every month, at 17:00 CET.
Talks and recordings: Our archive and planned talks are listed below and are available on the AATRN Networks YouTube playlist. The title of each talk links directly to its recording. Talks from the other AATRN seminars are also available on the AATRN YouTube channel.
Upcoming talks:
May 5, 2026, Arvind Kumar, KTH
Effect of second-order connectivity motifs on the structure and activity of biological neural networks. How the structure of network connectivity shapes network activity is a classical problem in neuroscience. The network structure in the brain depends on the spatial scale of our observations and on the brain region. Traditionally it was assumed that, within a 1 mm distance, connectivity can be approximated as an Erdős–Rényi-type network. However, accumulating data now show that pairwise connection probability alone is not enough to capture the network structure, even at microscopic scales. That is, we should consider higher-order statistics of connectivity. For instance, we should count the distribution of the different types of subnetworks involving 3 neurons. Data suggest that 3-neuron motifs such as convergent, divergent and chain-type motifs are either under- or overrepresented in cortical connectivity. Because in 3-neuron motifs we count the joint probability of two connections, these motifs are also referred to as second-order motifs. In my talk I will focus on second-order motifs and discuss how their overrepresentation may affect the network activity dynamics. Using numerical simulations of biological neural networks, we found that second-order motifs (chain and convergent) among excitatory neurons induce very high synchrony, which is inconsistent with the data. Next, we found that the hypersynchrony induced by excitatory motifs can be quenched by inhibitory motifs. To better understand the emergence and quenching of synchrony by second-order motifs, we measured several global properties of the whole network. This analysis revealed that second-order motifs lead to a heavy-tailed degree distribution and that the in- and out-degrees of neurons become correlated. That is, these motifs render the network connectivity heterogeneous.
This observation not only provided a natural explanation of how second-order motifs induce synchrony but also suggested a better way to think about the impact of higher-order motifs on network activity. Finally, I will discuss the consequences of connection heterogeneity in networks where neurons are wired in a distance-dependent manner. In particular, I will show that connection heterogeneity results in several hot spots in the network whose stimulation can be used to control network activity.
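As an illustrative aside (not part of the talk): the three second-order motif counts described in the abstract can be read off a binary adjacency matrix with a few lines of NumPy. The convention below (directed, non-induced counts, no self-loops) is an assumption; motif-counting conventions vary in the literature.

```python
import numpy as np
from math import comb

def second_order_motifs(A):
    """Count 3-neuron second-order motifs in a directed graph.

    A is a binary adjacency matrix with A[i, j] = 1 iff i -> j,
    and no self-loops.
    """
    A = np.asarray(A)
    out_deg = A.sum(axis=1)
    in_deg = A.sum(axis=0)
    A2 = A @ A                                # A2[i, k] = # of paths i -> j -> k
    chains = int(A2.sum() - np.trace(A2))     # exclude 2-cycles i -> j -> i
    divergent = sum(comb(int(d), 2) for d in out_deg)   # j <- i -> k
    convergent = sum(comb(int(d), 2) for d in in_deg)   # j -> i <- k
    return {"chain": chains, "divergent": divergent, "convergent": convergent}

# A 4-neuron example: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
print(second_order_motifs(A))  # {'chain': 2, 'divergent': 1, 'convergent': 1}
```

Comparing such counts against an Erdős–Rényi null model with the same pairwise connection probability is one way to quantify the over- or underrepresentation the abstract refers to.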
June 2, 2026, Alan Veliz-Cuba, University of Dayton
July 7, 2026, Antonio Rieser, CIMAT
Past talks:
March 3, 2026, Daniel Hernández Serrano, Universidad de Salamanca
From Persistence to Resilience: TDA for Measuring Robustness in Simplicial Complex Networks. Persistent homology is a central tool in Topological Data Analysis, yet it does not quantify how fragile or structurally refined homological cycles are. In particular, classical Betti numbers ignore the dimensions of the simplices generating these cycles, as well as the higher-order adjacency relations among them. In this work, we investigate the robustness of simplicial networks by analyzing cycle thickness and cohesiveness under random failures and targeted attacks. Inspired by persistent homology, we introduce new filtrations based on simplicial elimination rules, leading to two new invariants: thick Betti numbers and cohesive Betti numbers. These invariants provide a refined characterization of homological features, quantifying link thickness and connection strength within cycles. Finally, we show that resilience to simplicial attacks can be systematically studied through biparameter persistence modules, where one parameter models the attack process and the other captures structural refinement via thickness or cohesiveness.
February 3, 2026, Manuel Lecha, Italian Institute of Technology and Oxford University
Learning with Directed Higher-Order Structure: Applications to Brain Stimulus Classification. Graph Neural Networks learn representations from graph-structured data by exploiting relational and locality-based inductive biases, with successful applications in chemistry, social networks, and neuroscience. Beyond pairwise interactions, higher-order learning frameworks leverage richer adjacency structures induced by higher-order constructions—such as simplicial or cell complexes—to propagate information over signals supported on higher-order entities. In this talk, we focus on directed higher-order structures to design more expressive inductive biases for neural architectures. Since neural systems exhibit inherently directional and higher-order interaction patterns arising from asymmetric synaptic transmission and co-activation dynamics, directed topological message passing provides a principled framework for effective brain stimulus classification.
December 9, 2025, Luigi Caputi, University of Bologna
Eulerian magnitude homology: diagonality, injective words, and regular path homology. After recalling the definition and first properties of (Eulerian) magnitude homology, we shall focus on some of its algebraic and topological properties. On the one hand, we shall show that Eulerian magnitude homology of graphs detects the complete graph. Then, we describe the regular magnitude-path spectral sequence, that is, the spectral sequence of the (filtered) injective nerve of the reachability category, and explore some applications to regular path homology. This is joint work with Giuliamaria Menara.
November 4, 2025, Evan Patterson, Topos Institute
Networks as free categorical structures. In most types of networks, the paths between nodes, not merely the edges, are meaningful. Whenever that's the case, it can be useful to view the network as a freely generated categorical structure. We illustrate this phenomenon through a range of examples, some well known and others less so, and explain how it gives a clean mathematical account of operations on networks like motif finding and refinement. We then show how we're putting these ideas to work in CatColab, an interactive environment for modeling in domain-specific categorical logics.
October 7, 2025, Hubert Wagner, University of Florida
Beyond the shape of data: detecting spatial interactions in high-dimensional datasets. Topological data analysis focuses on detecting the shape of data. Motivated by problems related to training of deep learning models, we are interested in characterizing spatial interactions of two or more subsets of data. Such a task requires new tools, and we show one tool we call a mixup barcode. Based on joint work with N. Arustamyan, M. Wheeler and P. Bubenik.
July 1, 2025, Massimo Ferri, University of Bologna
Steady and ranging: persistence without homology. One extension of persistent homology is that of "persistence functions", which started with a 2020 paper by M. Bergomi and P. Vertechi. One way of producing persistence functions is through "steady" and "ranging" sets. This technique was first applied to graphs and digraphs and has recently been extended to a wide context, including hypergraphs. Some results on the stability of steady and ranging persistence will be presented and discussed.
June 3, 2025, Carles Casacuberta, University of Barcelona
A chordless cycle filtration scheme in topological data analysis for complex network dimensionality detection. Many complex networks, ranging from social to biological systems, exhibit structural patterns consistent with an underlying hyperbolic geometry. The dimensionality of this latent space is surprisingly low for most real-world networks. We introduce a novel topological data analysis weighting scheme for graphs, based on chordless cycles, aimed at estimating the dimensionality of complex networks in a data-driven way. We further show that the resulting descriptors can effectively estimate network dimensionality using a neural network architecture trained on a synthetic graph database constructed for this purpose. This is joint work with Aina Ferrà, Robert Jankowski, Meritxell Vila, and Mariàngels Serrano.
May 6, 2025, Chris Kapulkin, Western University
An invitation to discrete homotopy theory. Discrete homotopy theory, introduced around 20 years ago by Barcelo and collaborators building on the work of Atkin from the mid-seventies, is a homotopy theory of (simple) graphs. As such, it applies techniques previously employed in the "continuous" context to study discrete objects. It has found applications both within and outside mathematics, including: matroid theory, hyperplane arrangements, topological data analysis, time series analysis, and quite concretely in understanding how social interactions between preschoolers impact their academic performance. Recently, discrete homotopy theory has seen remarkable progress leading to proofs of several longstanding conjectures and new applications. This talk will be an introduction to discrete homotopy theory, highlighting some of the recent advances, of both theoretical and computational nature, and applications. These include the resolution of the conjecture of Babson, Barcelo, de Longueville, and Laubenbacher from 2006 (j/w Carranza, Compos. Math., 2024) and an efficient algorithm for computing discrete homology groups (j/w Kershaw, arXiv:2410.09939).
April 1, 2025, Lina Fajardo Gómez, University of South Florida
Prodsimplicial Complexes and Applications to Word Reduction Pathways. We propose custom-made cell complexes, in particular prodsimplicial complexes, in order to analyze pathways in directed graphs. The complexes are constructed by attaching cells that correspond to products of simplices and are best suited to study data of acyclic directed graphs. We apply these tools to directed graphs associated with reductions of double occurrence words and study the effects of word operations on the homology of the corresponding graphs.
March 4, 2025, Karel Devriendt, University of Oxford
Spanning trees, effective resistances and curvature on graphs. Kirchhoff's celebrated matrix tree theorem expresses the number of spanning trees of a graph as the maximal minor of the Laplacian matrix of the graph. In modern language, this determinantal counting formula reflects the fact that spanning trees form a regular matroid. In this talk, I will give a short historical overview of the tree-counting problem and a related quantity from electrical circuit theory: the effective resistance. I will describe a characterization of effective resistances in terms of a certain polytope and discuss some recent applications to discrete notions of curvature on graphs. More details can be found in the preprint: https://arxiv.org/abs/2410.07756
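As a numerical aside (not part of the talk): both quantities in the abstract are straightforward to compute with NumPy. The sketch below checks the matrix-tree theorem and the standard pseudoinverse formula for effective resistance on the complete graph K4, where Cayley's formula gives 4^(4-2) = 16 spanning trees and the resistance between any two nodes of K_n is 2/n.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def spanning_trees(A):
    """Matrix-tree theorem: the number of spanning trees equals any
    cofactor of the graph Laplacian (delete one row and column)."""
    L = laplacian(A)
    return round(np.linalg.det(L[1:, 1:]))

def effective_resistance(A, i, j):
    """R_ij = (e_i - e_j)^T L^+ (e_i - e_j), with L^+ the
    Moore-Penrose pseudoinverse of the Laplacian."""
    Lp = np.linalg.pinv(laplacian(A))
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

A = np.ones((4, 4)) - np.eye(4)       # complete graph K4
print(spanning_trees(A))              # 16
print(effective_resistance(A, 0, 1))  # 0.5
```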
February 4, 2025, Chunyin Siu, Stanford University
Homology and Homotopy Properties of Scale-Free Networks. Many real-world networks are believed to be scale-free. We study the random model of preferential attachment for such networks. Our results show that preferential attachment favors higher-order connectivity, in the sense that it drives the growth of Betti numbers in the finite-graph setting, and it annihilates homotopy groups in the infinite-graph setting. More precisely, we determined the high-probability growth rates of the Betti numbers of the clique complexes of finite preferential attachment graphs, as well as the sharp threshold at which the infinite clique complex becomes homotopy-connected almost surely. This is joint work with Gennady Samorodnitsky, Christina Lee Yu, and Rongyi He. The talk is based on the preprints [https://arxiv.org/abs/2305.11259] and [https://arxiv.org/abs/2406.17619].
December 3, 2024, Maxime Lucas, Université de Namur
Revealing patterns in brain activity with persistent homology. The brain is a notoriously complex system and its activity can exhibit a wide range of complex dynamics. We will discuss how persistent homology can help discriminate and describe these dynamics and the associated biological conditions that may induce them. In particular, we will talk about two cases: brain recordings (1) in human hypnosis experiments and (2) in epileptic fish. We first apply time-delay embedding to the time series to reconstruct the associated dynamical attractor, and then apply persistent homology. In both cases, persistent homology and topological indicators associated with it allow us to unveil how the brain dynamics are affected by biological factors (e.g. hypnotic susceptibility in the first case, or fish line and genetic mutation in the second). In turn, these topological indicators may have the potential to serve as markers for these biological factors and be useful to practitioners.
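To illustrate the first step of the pipeline described above (a generic sketch, not the speaker's code): time-delay embedding turns a scalar time series into a point cloud, which can then be fed to a persistent homology library such as GUDHI or Ripser (named here as examples, not necessarily the tools used in the talk).

```python
import numpy as np

def delay_embedding(x, dim, tau):
    """Takens-style time-delay embedding: map a scalar series x to
    points (x[t], x[t + tau], ..., x[t + (dim - 1) * tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# A periodic signal embeds as a closed loop; its 1-dimensional
# persistent homology would then show one long bar.
t = np.linspace(0, 4 * np.pi, 400)
cloud = delay_embedding(np.sin(t), dim=2, tau=25)
print(cloud.shape)  # (375, 2)
```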
November 5, 2024, Emily Roff, University of Edinburgh
Homotopy by degrees, and the magnitude-path spectral sequence. The past decade has seen a proliferation of homology theories for graphs. In particular, large literatures have grown up around magnitude homology (due to Hepworth and Willerton) and path homology (Grigor’yan, Lin, Muranov and Yau). Though their origins are quite separate, Asao proved in 2022 that in fact these homology theories are intimately related. To every directed graph one can associate a certain spectral sequence - the magnitude-path spectral sequence, or MPSS - whose page E^1 is exactly magnitude homology, while path homology lies along a single axis of page E^2. In this talk, based on joint work with Richard Hepworth, I will describe the construction of the sequence and argue that each one of its pages deserves to be regarded as a homology theory for directed graphs, satisfying a Künneth theorem and an excision theorem, and with a homotopy-invariance property that grows stronger as we turn the pages of the sequence.
October 1, 2024, Marco Nurisso, Politecnico di Torino
Interactions and topological synchronization in the simplicial Kuramoto model. Simplicial Kuramoto models have emerged as a diverse and intriguing class of models that capture the dynamics of interacting oscillators placed on the simplices of a simplicial complex. Being formalized with the tools of discrete differential geometry, these models reveal interesting relationships between topology, geometry and dynamics. We leverage their mathematical structure to give a microscopic interpretation of the interaction terms which, we see, include effectively both higher-order and self interactions. This naturally leads us to establish an equivalence between the simplicial Kuramoto model and the standard Kuramoto model on pairwise networks under the condition of the underlying simplicial complex being a pseudomanifold. Then, we describe the notion of simplicial synchronization, its relation to simplicial homology, and derive bounds on the oscillators’ coupling strength necessary or sufficient for achieving it.
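A minimal numerical sketch of the model described above (the boundary-matrix formulation is standard; the specific complex, parameters, and Euler scheme here are my own choices, not the speaker's): phases live on the edges of a filled triangle, and with zero natural frequencies the oscillators relax to a synchronized (phase-locked) state because the first homology of the complex is trivial.

```python
import numpy as np

# Boundary matrices of a single filled triangle on nodes {0, 1, 2}.
# Edge order: [01, 02, 12]; the 2-simplex has boundary 12 - 02 + 01.
B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])          # nodes x edges
B2 = np.array([[ 1], [-1], [ 1]])      # edges x triangles

def simplicial_kuramoto(theta, omega, sigma=1.0, dt=0.01, steps=5000):
    """Euler integration of the edge-level simplicial Kuramoto model:
    theta' = omega - sigma*B1^T sin(B1 theta) - sigma*B2 sin(B2^T theta)."""
    for _ in range(steps):
        drift = (omega
                 - sigma * B1.T @ np.sin(B1 @ theta)
                 - sigma * B2 @ np.sin(B2.T @ theta))
        theta = theta + dt * drift
    return theta, drift

rng = np.random.default_rng(0)
theta, drift = simplicial_kuramoto(0.1 * rng.standard_normal(3), np.zeros(3))
print(np.abs(drift).max() < 1e-3)  # True: the phases lock and the drift vanishes
```

On a complex with nontrivial first homology (e.g. an empty triangle), harmonic components of the phases would persist instead, which is the link between simplicial synchronization and homology mentioned in the abstract.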
June 4, 2024: Markus Reineke, Ruhr-Universität Bochum
Quiver representations and neural networks. Quiver representations formalize classification problems of linear algebra. We will review some basic concepts and aims of the representation theory of quivers, in particular the algebraic-geometric approach via quiver moduli spaces. We will model (certain aspects of) neural networks using the quiver language, and discuss what quiver representation theory can say, both qualitatively and quantitatively, about the space of network functions.
May 7, 2024: Sarah Percival, Michigan State University
Bounding the Interleaving Distance for Mapper Graphs with a Loss Function. Data consisting of a graph with a function mapping into R^d arise in many data applications, encompassing structures such as Reeb graphs, geometric graphs, and knot embeddings. As such, the ability to compare and cluster such objects is required in a data analysis pipeline, leading to a need for distances between them. In this work, we study the interleaving distance on discretizations of these objects, R^d-mapper graphs, where functor representations of the data can be compared by finding pairs of natural transformations between them. However, in many cases, computation of the interleaving distance is NP-hard. For this reason, we take inspiration from recent work by Robinson to find quality measures for families of maps that do not rise to the level of a natural transformation, called assignments. We then endow the functor images with the extra structure of a metric space and define a loss function which measures how far an assignment is from making the required diagrams of an interleaving commute. Finally, we show that the loss function can be computed in polynomial time for a given assignment. We believe this idea is both powerful and translatable, with the potential to provide approximations and bounds on interleavings in a broad array of contexts.
April 2, 2024: Katie Morrison, University of Northern Colorado
Predicting neural network dynamics from connectivity: a graph-theoretic and topological approach. Neural networks often exhibit complex patterns of activity that are shaped by the intrinsic structure of the network. For example, spontaneous sequences of neural activity have been observed in cortex and hippocampus, and patterned motor activity arises in central pattern generators for locomotion. We focus on a simplified neural network model known as Combinatorial Threshold-Linear Networks (CTLNs) in order to understand how the pattern of neural connectivity, as encoded by a directed graph, shapes the emergent nonlinear dynamics of the network. It has previously been shown that important aspects of these dynamics are controlled by the collection of stable and unstable fixed points of the network. In this talk, we highlight two different methods using covers of the connectivity graph to better understand the fixed points as well as the dynamics more broadly. These graph covers provide insight into network dynamics via either (1) the structure of the cover, e.g. its nerve, or (2) via the fixed points of the component subnetworks, which can be “glued” together to yield the fixed points of the full network. Both of these methods provide a significant dimensionality reduction of the network, giving insight into the emergent dynamics and how they are shaped by the network connectivity.
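A minimal sketch of the CTLN model discussed above (the dynamics are as defined in the CTLN literature; the parameter values eps = 0.25, delta = 0.5, theta = 1 are the commonly used defaults, assumed here rather than taken from the talk): a directed 3-cycle has no stable fixed point and produces a sequential limit cycle, a simple instance of connectivity shaping emergent dynamics.

```python
import numpy as np

def ctln_weights(G, eps=0.25, delta=0.5):
    """CTLN weight matrix from a directed graph (adjacency with
    G[i, j] = 1 iff j -> i): W_ij = -1 + eps if j -> i, else -1 - delta."""
    W = np.where(G == 1, -1 + eps, -1 - delta).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def simulate(W, x0, theta=1.0, dt=0.01, steps=10000):
    """Euler integration of the threshold-linear dynamics
    dx/dt = -x + [W x + theta]_+."""
    x, traj = x0, []
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + theta))
        traj.append(x)
    return np.array(traj)

# Directed 3-cycle 0 -> 1 -> 2 -> 0 (G[i, j] = 1 iff j -> i).
G = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
traj = simulate(ctln_weights(G), np.array([0.2, 0.1, 0.0]))
print(traj.min() >= 0, traj.max() <= 1.0)  # rates stay nonnegative and bounded
```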
March 5, 2024: Audun Myers, Pacific Northwest National Laboratory
Data Analysis Using Zigzag Persistence. Temporal hypergraphs are a powerful tool for modeling complex systems with multi-way interactions and temporal dynamics. However, existing tools for studying temporal hypergraphs do not adequately capture the evolution of their topological structure over time. In this work, we leverage zigzag persistence from Topological Data Analysis (TDA) to study the topological evolution of time-evolving graphs and hypergraphs. We apply our pipeline to several datasets including cyber security and social network datasets and show how the topological structure of their temporal hypergraph representations can be used to understand the underlying dynamics.
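To make the zigzag construction concrete (an illustrative sketch, not the speakers' pipeline): each graph snapshot is interleaved with its union with the next snapshot, giving the diagram G0 -> G0 u G1 <- G1 -> G1 u G2 <- G2. The toy code below tracks only Betti-0 (component counts) at each node of that diagram; computing the actual zigzag barcode, with birth and death intervals matched across the inclusions, requires a library such as Dionysus (named as an example).

```python
def betti0(n, edges):
    """Betti-0 (number of connected components) of a graph on
    vertices 0..n-1, via union-find with path compression."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(v) for v in range(n)})

# Three snapshots of a temporal graph on 4 vertices, interleaved
# with pairwise unions to form the zigzag diagram.
snapshots = [{(0, 1)}, {(1, 2)}, {(2, 3), (0, 1)}]
zigzag = []
for i, g in enumerate(snapshots):
    zigzag.append(g)
    if i + 1 < len(snapshots):
        zigzag.append(g | snapshots[i + 1])
print([betti0(4, g) for g in zigzag])  # [3, 2, 3, 1, 2]
```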
February 13, 2024: Carlo Collari, University of Pisa
Groebner methods and magnitude homology. In this talk we show how to apply the framework developed by Sam and Snowden to study structural properties (eg. bound on rank and order of torsion) of graph homologies, in the spirit of Ramos, Miyata and Proudfoot. In particular, we focus on magnitude homology for graphs, which was introduced by Hepworth and Willerton. The talk is organised as follows: we start with a short introduction to modules over categories and to the theory of Groebner categories. Then, we introduce magnitude homology and see some examples. Finally, we will see how to use the theory of Groebner categories to obtain information on magnitude (co)homology.
December 5, 2023: Dani Bassett, University of Pennsylvania
Science as branched flow: A case study in citation disparities. Science is a beautiful rational process of highly structured inquiry that allows us to learn more about our world. By it, we see past old theories, and build new ones. We realize a phenomenon occurs because of this, and not that. Perennially the skeptic, we spar with our own internal models of how things might happen: always questioning, ever critical, rarely certain. What if we were to turn this audacious questioning towards—not science—but how we do science? Not broadly a natural phenomenon but more specifically a human phenomenon? This query is precisely what drives the field of the science of science. How does science happen? How do we choose scientific questions to pursue? How do we map fields of inquiry? How do we determine where the frontiers are, and then step beyond them? In this talk, I will canvass this broader research agenda while foregrounding recent advances at the intersection of science of science, machine learning, and big data. Along the way, I’ll uncover gender, racial, and ethnic inequalities in the most obvious of places (the demographics of scientists) and also in the most unexpected and out-of-the-way places (the reference list of journal articles). I will consider what these data mean for the way we think about science—for our theories of what science is. What opportunities might we have to see past old theories and build a new one? What possibilities to lay down a new praxis for a science of tomorrow?
November 7, 2023: Cristian Bodnar, Microsoft
A Sheaf-based Approach to Graph Neural Networks. The multitude of applications where data is attached to spaces with non-Euclidean structure has driven the rise of the field of Geometric Deep Learning (GDL). Nonetheless, from many points of view, geometry does not always provide the right level of abstraction to study all the spaces that commonly emerge in such settings. For instance, graphs, by far the most prevalent type of space in GDL, do not even have a geometrical structure in the strict sense. In this talk, I will explore how we can take a sheaf-theoretic perspective of the field with a focus on understanding and developing new Graph Neural Network models.
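To give a flavor of the sheaf-theoretic perspective (a generic sketch of the sheaf Laplacian underlying sheaf neural networks, not code from the talk): a cellular sheaf attaches a vector space to each node and a restriction map to each edge incidence, and the resulting sheaf Laplacian generalizes the graph Laplacian used in ordinary GNN message passing. As a sanity check, identity restriction maps with 1-dimensional stalks recover the ordinary graph Laplacian.

```python
import numpy as np

def sheaf_laplacian(n, d, edges, maps):
    """Sheaf Laplacian for stalks R^d at each node. For edge e = (u, v),
    maps[e] = (Fu, Fv) are the d x d restriction maps into the edge stalk."""
    L = np.zeros((n * d, n * d))
    for (u, v), (Fu, Fv) in zip(edges, maps):
        su, sv = slice(u * d, (u + 1) * d), slice(v * d, (v + 1) * d)
        L[su, su] += Fu.T @ Fu
        L[sv, sv] += Fv.T @ Fv
        L[su, sv] -= Fu.T @ Fv
        L[sv, su] -= Fv.T @ Fu
    return L

# With identity restriction maps and 1-dimensional stalks, the sheaf
# Laplacian of a triangle graph is the ordinary graph Laplacian.
edges = [(0, 1), (1, 2), (0, 2)]
I = np.eye(1)
L = sheaf_laplacian(3, 1, edges, [(I, I)] * 3)
print(L)  # 2 on the diagonal, -1 off it
```

Learning nontrivial restriction maps, rather than fixing them to the identity, is what lets sheaf-based models go beyond standard graph diffusion.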