Graphs and more Complex structures for Learning and Reasoning

Workshop at AAAI 2021

Schedule

The workshop will be held virtually on February 9, 2021, from 9:00 AM to 5:30 PM Eastern Standard Time (UTC-5). We request that attendees download the latest version of Zoom. The detailed schedule is as follows:

09:00 - 09:40 Invited talk by Manlio De Domenico

Multilayer modeling of complex systems: from cells to societies

Complex systems are characterized by constituents -- from neurons in the brain to individuals in a social network -- which exhibit special structural organization and nonlinear dynamics. As a consequence, a complex system cannot be understood by studying its units separately, because their interactions lead to unexpected emergent phenomena, from collective behavior to phase transitions.

In the last decade, we have discovered that a new level of complexity characterizes a variety of natural and artificial systems, where units interact, simultaneously, in distinct ways. This is the case, for instance, of multimodal transportation systems (e.g., metro, bus and train networks) or of social networks, whose interactions might be of different types (e.g., trust, trade, virtual).

This unprecedented wealth of data allows us to categorize a system's interdependencies by defining distinct "layers", each one encoding a different network representation of the system. The result is a multilayer network model.
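
As a minimal illustration of this idea, a multilayer network can be represented as a collection of per-layer graphs over a shared node set. The sketch below uses networkx, with hypothetical layer names chosen to match the multimodal-transport example above:

    import networkx as nx

    # Shared node set: stations in a city (hypothetical example data).
    stations = ["A", "B", "C", "D"]

    # Each layer encodes one interaction type over the same nodes.
    layers = {
        "metro": nx.Graph([("A", "B"), ("B", "C")]),
        "bus":   nx.Graph([("A", "C"), ("C", "D")]),
        "train": nx.Graph([("B", "D")]),
    }
    for g in layers.values():
        g.add_nodes_from(stations)

    # A node's degree now varies by layer, one facet of its versatility.
    for name, g in layers.items():
        print(name, dict(g.degree()))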

In this talk we will discuss the most salient features of multilayer systems and how to determine their robustness, node versatility and mesoscale organization, with special attention to applications to empirical biological, socio-ecological and socio-technical networks. We will also discuss recent applications to systems medicine and infodemiology of COVID-19.


The recorded talk can be accessed from this link on YouTube.

09:40 - 10:20 Invited talk by Ginestra Bianconi

Information theory of networks

Information theory is one of the most fundamental theoretical frameworks of network science and machine learning. However, the current information-theoretic frameworks for understanding networks, based on maximum-entropy network ensembles, are not able to explain the emergence of heterogeneity in complex networks. Here, we fill this knowledge gap by developing an information-theoretic framework for networks based on finding a trade-off between the information content of a compressed representation of the ensemble and the information content of the actual network ensemble.
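
For orientation, the central quantity in the maximum-entropy approach mentioned above is the Shannon entropy of a network ensemble; in standard notation (not necessarily the talk's),

    S = -\sum_{G} P(G) \ln P(G),

where P(G) is the probability the ensemble assigns to a graph G. Maximum-entropy ensembles maximize S subject to constraints, such as a fixed expected degree sequence.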


The recorded talk can be accessed from this link on YouTube.

10:20 - 11:00 Invited talk by Gesine Reinert

Anomaly detection in networks

Detecting fraud is a global challenge. This talk will focus mainly on financial and infrastructure transaction networks. There are many methods available to detect specific anomalies; this talk will present an approach for detecting unknown anomalies. To that end, the strategy derives features from network comparison methods and spectral analysis, and then applies a random forest to classify nodes as normal or anomalous. The method is tested on synthetic data as well as infrastructure data.

This talk is based on joint work with Andrew Elliott, Mihai Cucuringu, Milton Martinez Luaces and Paul Reidy.
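
A minimal sketch of this kind of pipeline, assuming simple structural node features as stand-ins for the richer network-comparison and spectral features used in the talk, and placeholder labels in place of real anomaly annotations:

    import networkx as nx
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def node_features(g):
        """Per-node structural features (degree, clustering); stand-ins for
        the network-comparison and spectral features described above."""
        deg = dict(g.degree())
        clust = nx.clustering(g)
        return np.array([[deg[n], clust[n]] for n in g.nodes()])

    # Hypothetical training data: a random graph plus placeholder labels.
    g = nx.erdos_renyi_graph(100, 0.05, seed=0)
    X = node_features(g)
    y = np.random.default_rng(0).integers(0, 2, size=len(X))  # placeholder

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    scores = clf.predict_proba(X)[:, 1]  # per-node anomaly scores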


The recorded talk can be accessed from this link on YouTube.

11:00 - 11:30 Poster flash presentations

More information about the accepted papers, such as videos and posters, can be accessed from this link.

11:30 - 12:10 Invited talk by Anima Anandkumar

Infusing Structure and Domain Knowledge into Deep Learning

The deep-learning revolution has achieved impressive progress through the convergence of data, algorithms, and computing infrastructure. However, for further progress, we cannot rely solely on bigger models. We need to reduce our dependence on labeled data, and design algorithms that can incorporate more structure and domain knowledge. Examples include tensors, graphs, physical laws, and simulations. I will describe efficient frameworks that enable developers to easily prototype such models, e.g., TensorLy-Torch for incorporating tensorized architectures. Compositionality is an important hallmark of intelligence: humans are able to compose concepts to reason about entirely new scenarios. We have created a new dataset for few-shot learning, inspired by the Bongard challenge, and show that all existing meta-learning methods fall severely short of human performance. We demonstrate that neuro-symbolic reasoning is critical for tackling such few-shot learning challenges.
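
To give a flavor of what a tensorized architecture means, here is a minimal hand-rolled sketch of a low-rank factorized linear layer in PyTorch. This is an illustration of the general idea only, not the TensorLy-Torch API:

    import torch
    import torch.nn as nn

    class LowRankLinear(nn.Module):
        """Linear layer whose weight matrix W is factorized as U @ V,
        cutting parameters from d_in*d_out to rank*(d_in + d_out)."""
        def __init__(self, d_in, d_out, rank):
            super().__init__()
            self.U = nn.Parameter(torch.randn(d_out, rank) * 0.02)
            self.V = nn.Parameter(torch.randn(rank, d_in) * 0.02)
            self.bias = nn.Parameter(torch.zeros(d_out))

        def forward(self, x):
            # Equivalent to x @ (U @ V).T + bias, but never forms U @ V.
            return x @ self.V.T @ self.U.T + self.bias

    layer = LowRankLinear(1024, 1024, rank=32)  # ~66k params vs ~1M dense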


The recorded talk can be accessed from this link on YouTube.

12:10 - 13:00 Lunch


13:00 - 13:40 Invited talk by Stephen Bach

Using Knowledge Graphs to Learn without Labels

How can we automatically exploit the information in common sense knowledge graphs to create classifiers for new concepts without labeled training examples? I will discuss our recent work on methods for incorporating knowledge graphs into zero-shot learning. We introduce ZSL-KG, a framework for identifying concepts represented as graph nodes without any examples for those concepts. ZSL-KG is based on a novel class of graph neural networks called transformer GCNs. These networks use non-linear, permutation-invariant aggregators based on self-attention to better capture the rich information in knowledge graphs. This framework is completely inductive, meaning that new nodes and edges can be added to the knowledge graph at test time to describe novel classes. On computer vision and natural language processing tasks, ZSL-KG significantly outperforms (+5 percentage points of accuracy on average) prior general-purpose, graph-based methods. It also outperforms specialized methods developed for specific tasks.
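
A rough sketch of a permutation-invariant self-attention aggregator over a node's neighbors, the kind of component the abstract describes, is shown below. It is illustrative only, not the ZSL-KG implementation:

    import torch
    import torch.nn as nn

    class SelfAttentionAggregator(nn.Module):
        """Aggregate neighbor embeddings with self-attention, then mean-pool.
        Without positional encodings, self-attention is order-agnostic, and
        mean-pooling the attended set makes the output permutation-invariant."""
        def __init__(self, dim, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, neighbors):           # (num_neighbors, dim)
            h = neighbors.unsqueeze(0)          # (1, num_neighbors, dim)
            out, _ = self.attn(h, h, h)         # self-attention over the set
            return out.mean(dim=1).squeeze(0)   # (dim,), order-independent

    agg = SelfAttentionAggregator(dim=64)
    msg = agg(torch.randn(5, 64))  # embedding aggregated from 5 neighbors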


The recorded talk can be accessed from this link on YouTube.

13:40 - 14:20 Invited talk by William L. Hamilton

Graph Representation Learning: Recent Advances and Open Challenges

Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial if we want systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, most prominently in the development of graph neural networks (GNNs). Advances in GNNs have led to state-of-the-art results in numerous domains, including chemical synthesis, 3D-vision, recommender systems, question answering, and social network analysis. In the first part of this talk I will provide an overview and summary of recent progress in this fast-growing area, highlighting foundational methods and theoretical motivations. In the second part of this talk I will discuss fundamental limitations of the current GNN paradigm. Finally, I will conclude the talk by discussing recent progress my group has made in advancing graph representation learning beyond the GNN paradigm.
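
For readers new to the area, a minimal sketch of the message-passing step at the heart of most GNNs (a generic illustration in PyTorch, not a specific method from the talk):

    import torch
    import torch.nn as nn

    class SimpleGNNLayer(nn.Module):
        """One round of message passing: average each node's neighbor
        features, then apply a learned linear map and nonlinearity."""
        def __init__(self, d_in, d_out):
            super().__init__()
            self.lin = nn.Linear(2 * d_in, d_out)

        def forward(self, x, adj):
            # x: (num_nodes, d_in); adj: (num_nodes, num_nodes) 0/1 matrix
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
            msgs = adj @ x / deg  # mean over neighbors
            return torch.relu(self.lin(torch.cat([x, msgs], dim=1)))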


The recorded talk can be accessed from this link on YouTube.

14:20 - 15:00 Invited talk by Phil Chodrow

Hypergraph Clustering: From Blockmodels to Modularities

Hypergraph clustering is a natural framework for detecting modules of functionally similar entities in complex relational systems. In this talk, we propose a flexible approach to hypergraph clustering based on a generalization of the popular modularity functional for dyadic networks. We first introduce the degree-corrected hypergraph stochastic blockmodel (DCHSBM), a generative model for hypergraphs with heterogeneous degree and dimension distributions. We then derive from the DCHSBM likelihood a family of hypergraph clustering objective functions, along with a combinatorial identity that enables their efficient computation. Next, we formulate a generalization of the fast Louvain algorithm for finding high-quality partitions. This generalized Louvain algorithm allows us to mine hypergraphs with up to 100,000 nodes without resorting to dyadic projections. We also demonstrate that modularity-based hypergraph clustering can succeed in cases where approaches based on dyadic projections are guaranteed to fail due to information-theoretic limits.
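
To make "dyadic projection" concrete, the sketch below represents a hypergraph as a list of hyperedges and builds its clique projection, the lossy reduction that modularity-based hypergraph clustering avoids (hypothetical example data):

    from itertools import combinations
    import networkx as nx

    # A hypergraph as a list of hyperedges.
    hyperedges = [{"a", "b", "c"}, {"c", "d"}, {"b", "d", "e", "f"}]

    # Node degrees and edge dimensions: the two quantities the DCHSBM
    # corrects for via its degree and dimension distributions.
    degree = {}
    for e in hyperedges:
        for v in e:
            degree[v] = degree.get(v, 0) + 1
    dimensions = [len(e) for e in hyperedges]  # [3, 2, 4]

    # Dyadic (clique) projection: each hyperedge becomes a clique, and the
    # record of which pairs co-occurred in which hyperedge is lost.
    proj = nx.Graph()
    for e in hyperedges:
        proj.add_edges_from(combinations(sorted(e), 2))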


The recorded talk can be accessed from this link on YouTube.

15:00 - 15:40 Invited talk by Ines Chami

Hyperbolic Embeddings and a Knowledge Graph Application

Graph embedding methods aim at learning representations of nodes that preserve graph properties (e.g. graph distances). These embeddings can then be used in downstream applications such as recommendation systems. Most machine learning algorithms learn embeddings into the standard Euclidean space. Recent research shows promise for more faithful embeddings by leveraging non-Euclidean geometries, such as hyperbolic or spherical geometries. In particular, trees can be embedded almost perfectly into hyperbolic space, while this is not possible in standard Euclidean space. In this talk, we review basic notions of hyperbolic geometry and then go over machine learning algorithms that learn embeddings into hyperbolic space. We cover a recent application of such embeddings, namely link prediction in Knowledge Graphs.
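
For concreteness, the distance in the Poincaré ball model of hyperbolic space, a standard formula rather than notation from the talk, is

    d(x, y) = \operatorname{arcosh}\left(1 + \frac{2\,\lVert x - y \rVert^2}{(1 - \lVert x \rVert^2)(1 - \lVert y \rVert^2)}\right),

for points x, y with norm less than 1. Distances grow rapidly near the boundary of the ball, giving hyperbolic space the exponentially growing volume that lets trees embed with low distortion.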


The recorded talk can be accessed from this link on YouTube.

15:40 - 15:50 Coffee break


15:50 - 16:30 Poster presentation of accepted papers

More information about the accepted papers, such as videos and posters, can be accessed from this link.

16:30 - 17:30 Panel discussion

The recorded panel discussion can be accessed from this link on YouTube.
