AATRN Networks Seminar

About: A regular online seminar covering how network structure is revealed through topology and algebra, as well as applications of these methods to machine learning and other sciences. Topics covered in the seminar include (but are not limited to) homology theories for digraphs, algebras generated by digraphs and their representation theory, applied category theory inspired by networks, TDA tools for networks, dynamics on networks, applications to machine learning and computational neuroscience, random graphs and complexes, and topological interpretations of graphs and hypergraphs.

Organizers: Daniela Egas Santander, Henri Riihimäki, and Jason P. Smith (listed alphabetically by last name). The Networks seminar is hosted by the Applied Algebraic Topology Research Network (AATRN). More information about this organization is available at https://www.aatrn.net

Google group: Used to announce talks; emails are sent about once a month. To join, follow the link and click "Ask to join group": https://groups.google.com/g/aatrn-networks

Schedule: Seminars normally occur on the first Tuesday of every month at 17:00 CET.

Talks and recordings: Our archive and planned talks are listed below and are available in the AATRN Networks YouTube playlist. The title of each talk links directly to its recording. Talks from the other AATRN seminars are also available on the AATRN YouTube channel.

Upcoming talks:

Past talks:

Predicting neural network dynamics from connectivity: a graph-theoretic and topological approach. Neural networks often exhibit complex patterns of activity that are shaped by the intrinsic structure of the network. For example, spontaneous sequences of neural activity have been observed in cortex and hippocampus, and patterned motor activity arises in central pattern generators for locomotion. We focus on a simplified neural network model known as Combinatorial Threshold-Linear Networks (CTLNs) in order to understand how the pattern of neural connectivity, as encoded by a directed graph, shapes the emergent nonlinear dynamics of the network. It has previously been shown that important aspects of these dynamics are controlled by the collection of stable and unstable fixed points of the network. In this talk, we highlight two different methods using covers of the connectivity graph to better understand the fixed points as well as the dynamics more broadly. These graph covers provide insight into network dynamics via either (1) the structure of the cover, e.g. its nerve, or (2) the fixed points of the component subnetworks, which can be "glued" together to yield the fixed points of the full network. Both of these methods provide a significant dimensionality reduction of the network, giving insight into the emergent dynamics and how they are shaped by the network connectivity.
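For readers unfamiliar with the model: a CTLN attaches the threshold-linear dynamics dx/dt = -x + [Wx + θ]_+ to a directed graph, with the weight matrix W determined by the graph via two parameters ε and δ. The following is a minimal simulation sketch with standard parameter choices assumed, not code from the talk:

```python
import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    """Standard CTLN weight matrix from a directed graph.

    adj[i][j] == 1 means there is an edge j -> i.
    W_ij = -1 + eps if j -> i, W_ij = -1 - delta otherwise, W_ii = 0.
    """
    W = np.where(np.array(adj) == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate_ctln(W, theta=1.0, x0=None, dt=0.01, steps=5000):
    """Forward-Euler integration of dx/dt = -x + [W x + theta]_+."""
    n = W.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj.append(x.copy())
    return np.array(traj)

# A directed 3-cycle 0 -> 1 -> 2 -> 0, a graph known to support
# sequential (limit-cycle) activity rather than a stable fixed point.
adj = [[0, 0, 1],
       [1, 0, 0],
       [0, 1, 0]]
W = ctln_weights(adj)
traj = simulate_ctln(W, x0=[0.2, 0.0, 0.0])
```

With θ = 1 the activity stays nonnegative and bounded by 1, so the trajectory can be inspected directly for sequential firing along the cycle.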

Data Analysis Using Zigzag Persistence. Temporal hypergraphs are a powerful tool for modeling complex systems with multi-way interactions and temporal dynamics. However, existing tools for studying temporal hypergraphs do not adequately capture the evolution of their topological structure over time. In this work, we leverage zigzag persistence from Topological Data Analysis (TDA) to study the topological evolution of time-evolving graphs and hypergraphs. We apply our pipeline to several datasets including cyber security and social network datasets and show how the topological structure of their temporal hypergraph representations can be used to understand the underlying dynamics.
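One common way to set up such a zigzag is to interleave time-ordered snapshots with their pairwise unions, so that inclusion maps point in alternating directions. The toy sketch below works at the level of graphs and Betti_0 (number of connected components) and is illustrative only; the talk's actual pipeline computes full zigzag persistence on hypergraph representations, for which a dedicated solver would be used:

```python
def graph(edges):
    """A graph as a (nodes, edges) pair built from an edge list."""
    nodes = {v for e in edges for v in e}
    return (nodes, set(edges))

def union(G, H):
    """Union of two graphs given as (nodes, edges) pairs."""
    return (G[0] | H[0], G[1] | H[1])

def components(G):
    """Number of connected components (Betti_0) via union-find."""
    parent = {v: v for v in G[0]}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in G[1]:
        parent[find(u)] = find(v)
    return len({find(v) for v in G[0]})

def zigzag_sequence(snapshots):
    """Interleave snapshots with pairwise unions:

        G_0 -> G_0 u G_1 <- G_1 -> G_1 u G_2 <- G_2 -> ...

    The inclusions into the unions are the arrows of the zigzag diagram.
    """
    seq = [snapshots[0]]
    for G, H in zip(snapshots, snapshots[1:]):
        seq.append(union(G, H))
        seq.append(H)
    return seq

# Three sliding-window snapshots of a toy temporal graph
G0 = graph([(1, 2), (2, 3)])
G1 = graph([(2, 3), (3, 4)])
G2 = graph([(4, 1), (5, 6)])
seq = zigzag_sequence([G0, G1, G2])
b0 = [components(G) for G in seq]  # Betti_0 along the zigzag
```

Tracking how `b0` changes along the sequence is the 0-dimensional shadow of what zigzag persistence records in all dimensions: features that are born, persist through the unions, and die as the temporal graph evolves.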

Groebner methods and magnitude homology. In this talk we show how to apply the framework developed by Sam and Snowden to study structural properties (e.g. bounds on the rank and on the order of torsion) of graph homologies, in the spirit of Ramos, Miyata and Proudfoot. In particular, we focus on magnitude homology for graphs, which was introduced by Hepworth and Willerton. The talk is organised as follows: we start with a short introduction to modules over categories and to the theory of Groebner categories. Then, we introduce magnitude homology and see some examples. Finally, we show how to use the theory of Groebner categories to obtain information on magnitude (co)homology.
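As a reminder of the central object, here is a sketch of Hepworth and Willerton's definition (notation may differ from the talk's). For a graph G with path metric d, the magnitude chain groups are generated by tuples of vertices of fixed total length:

```latex
% Magnitude chain complex of a graph G with path metric d:
% MC_{k,l}(G) is free abelian on (k+1)-tuples of vertices of total length l.
\mathrm{MC}_{k,\ell}(G) \;=\;
  \mathbb{Z}\Big\langle (x_0,\dots,x_k) \;:\;
  x_i \neq x_{i+1},\ \textstyle\sum_{i=0}^{k-1} d(x_i, x_{i+1}) = \ell \Big\rangle

% The differential drops an interior vertex when doing so preserves length:
\partial_i(x_0,\dots,x_k) =
\begin{cases}
(x_0,\dots,\widehat{x_i},\dots,x_k)
  & \text{if } d(x_{i-1},x_{i+1}) = d(x_{i-1},x_i) + d(x_i,x_{i+1}),\\[2pt]
0 & \text{otherwise,}
\end{cases}
\qquad
\partial = \sum_{i=1}^{k-1} (-1)^i \partial_i

% Magnitude homology is the homology of this complex in each length l:
\mathrm{MH}_{k,\ell}(G) = H_k\big(\mathrm{MC}_{\bullet,\ell}(G),\, \partial\big)
```

The Groebner-category machinery is then applied to these chain groups viewed as modules over a suitable category, which is what yields the rank and torsion bounds mentioned above.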

Science as branched flow: A case study in citation disparities. Science is a beautiful rational process of highly structured inquiry that allows us to learn more about our world. By it, we see past old theories, and build new ones. We realize a phenomenon occurs because of this, and not that. Perennially the skeptic, we spar with our own internal models of how things might happen: always questioning, ever critical, rarely certain. What if we were to turn this audacious questioning towards—not science—but how we do science? Not broadly a natural phenomenon but more specifically a human phenomenon? This query is precisely what drives the field of the science of science. How does science happen? How do we choose scientific questions to pursue? How do we map fields of inquiry? How do we determine where the frontiers are, and then step beyond them? In this talk, I will canvas this broader research agenda while foregrounding recent advances at the intersection of science of science, machine learning, and big data. Along the way, I’ll uncover gender, racial, and ethnic inequalities in the most obvious of places (the demographics of scientists) and also in the most unexpected and out-of-the-way places (the reference list of journal articles). I will consider what these data mean for the way we think about science—for our theories of what science is. What opportunities might we have to see past old theories and build a new one? What possibilities to lay down a new praxis for a science of tomorrow?

A Sheaf-based Approach to Graph Neural Networks. The multitude of applications where data is attached to spaces with non-Euclidean structure has driven the rise of the field of Geometric Deep Learning (GDL). Nonetheless, from many points of view, geometry does not always provide the right level of abstraction to study all the spaces that commonly emerge in such settings. For instance, graphs, by far the most prevalent type of space in GDL, do not even have a geometrical structure in the strict sense. In this talk, I will explore how we can take a sheaf-theoretic perspective of the field with a focus on understanding and developing new Graph Neural Network models.
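To make the sheaf-theoretic objects concrete: a cellular sheaf on a graph assigns a vector-space stalk to each vertex and edge, with restriction maps from vertex stalks to incident edge stalks, and its sheaf Laplacian generalizes the graph Laplacian that underlies standard GNN message passing. The following numerical sketch is illustrative only and is not the speaker's implementation:

```python
import numpy as np

def sheaf_laplacian(edges, restrictions, d, n):
    """Sheaf Laplacian L_F = delta^T delta for a cellular sheaf on a graph.

    edges: list of (u, v) vertex pairs (an arbitrary fixed orientation).
    restrictions: dict mapping (edge_index, vertex) -> d x d restriction map.
    All vertex and edge stalks are R^d here, so L_F is (n*d) x (n*d).
    """
    delta = np.zeros((len(edges) * d, n * d))  # coboundary map
    for e, (u, v) in enumerate(edges):
        delta[e*d:(e+1)*d, u*d:(u+1)*d] = restrictions[(e, u)]
        delta[e*d:(e+1)*d, v*d:(v+1)*d] = -restrictions[(e, v)]
    return delta.T @ delta

# Path graph 0 - 1 - 2 with identity restriction maps: the sheaf
# Laplacian reduces to the usual graph Laplacian (tensored with I_d).
d, n = 2, 3
edges = [(0, 1), (1, 2)]
I = np.eye(d)
restr = {(0, 0): I, (0, 1): I, (1, 1): I, (1, 2): I}
L = sheaf_laplacian(edges, restr, d, n)

# One step of sheaf diffusion, the core operation behind many sheaf
# GNN layers (learned feature mixing omitted for clarity):
X = np.random.randn(n * d)
X_next = X - 0.1 * (L @ X)
```

With identity restriction maps the kernel of L consists of the constant sections (all vertices carrying the same stalk vector); choosing nontrivial restriction maps changes which signals diffusion preserves, which is the extra expressive power the sheaf perspective offers over plain graph Laplacian models.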