Confirmed Speakers

Bridging AI, Game Theory and Complex Networks for making inferences

Recent years have witnessed rapid improvement in AI systems, resulting in numerous successful applications that pervade and revolutionize every aspect of our daily lives, reaching human-comparable or super-human performance on several relevant tasks (e.g., computer vision, natural language processing). Deep Learning (DL) has played a crucial role in this transformation. For instance, DL has allowed the effective integration of data-driven and knowledge-driven approaches, which is increasingly recognized as key to improving processes and inferences in many applications (e.g., biomarker detection for various diseases).

Nevertheless, how deep neural networks encode knowledge about a specific task is still unclear. Indeed, these “black box” approaches still suffer from a lack of interpretability, which often prevents their application to real case studies.

Moreover, most recent successful developments in AI have been in the field of “narrow AI”, focused on a very limited set of tasks or goals (e.g., image interpretation, natural language processing, label classification) and strictly tied to the availability of huge datasets and computational power. Finally, a further crucial issue is the lack of generalizability, robustness, causality and explainability.

The idea is to offer a new perspective on AI, drawing on my interdisciplinary research on game theory and complex networks, and to discuss how these methodologies can bring new insights and potential to the field of AI for making inferences.

A.DiStefano@tees.ac.uk

Analyzing time series networks using simplicial complexes


Understanding the basic dynamic characteristics of a complex socio-economic system is of great importance for a number of issues, mostly connected with its governance. Nonlinear analyses of time series provide good methods for a diagnosis and for indicating the dynamical regime (stable, periodic, complex, chaotic). Here we apply a new “hybrid” approach (Chutani et al., 2020) in which transforming a time series into a network allows us to use the powerful methods of network science, and we apply it to a few tourism systems (aka destinations). For each destination we collect a time series of overnight stays and transform it into a network using the visibility graph algorithm (Lacasa et al., 2008). We then apply the methods of algebraic topology, in particular simplicial complexes, and compute metrics that allow us to identify the regime of the system. We compare the results with other studies in the literature and find substantial agreement.

Chutani, M., Rao, N., Nirmal Thyagu, N., & Gupte, N. (2020). Characterizing the complexity of time series networks of dynamical systems: A simplicial approach. Chaos, 30(1), 013109.

Lacasa, L., Luque, B., Ballesteros, F., Luque, J., & Nuno, J. C. (2008). From time series to complex networks: The visibility graph. Proceedings of the National Academy of Sciences, 105(13), 4972-4975.
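For readers unfamiliar with the transformation used here, the natural visibility graph of Lacasa et al. (2008) can be sketched in a few lines of Python; this is a minimal illustration, not the code used in the study:

```python
def visibility_graph(series):
    """Build the natural visibility graph of a time series.

    Nodes are time indices; two points (i, y_i) and (j, y_j) are
    connected if every point between them lies strictly below the
    straight line of sight joining them (Lacasa et al., 2008).
    """
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i]
                + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

# Consecutive points are always mutually visible, so the graph is connected.
edges = visibility_graph([3.0, 1.0, 2.0, 4.0, 1.5])
```

The resulting edge set can then be handed to any network-analysis library; the peak at index 3 "sees" every other point except those it blocks.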

rodolfo.baggio@unibocconi.it

Weighted simplicial complexes and their representation power of higher-order network data and topology


Hypergraphs and simplicial complexes both capture the higher-order interactions of complex systems, ranging from higher-order collaboration networks to brain networks. One open problem in the field is what should drive the choice of mathematical framework for describing higher-order networks starting from data on higher-order interactions. Unweighted simplicial complexes typically involve a loss of information about the data, though they have the benefit of capturing the data's higher-order topology. In this work we show that weighted simplicial complexes allow us to circumvent the limitations of unweighted simplicial complexes in representing higher-order interactions. In particular, weighted simplicial complexes can represent higher-order networks without loss of information, while at the same time capturing the weighted topology of the data. The higher-order topology is probed by studying the spectral properties of suitably defined weighted Hodge Laplacians displaying a normalized spectrum. The higher-order spectrum of (weighted) normalized Hodge Laplacians is studied here by combining cohomology theory with information theory. In the proposed framework, we quantify spectra of different dimensions using higher-order spectral entropies, and compare the information content of higher-order entropies and spectral relative entropies. The proposed methodology is tested on real higher-order collaboration networks and on the weighted version of the simplicial complex model “Network Geometry with Flavor”.
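The full weighted Hodge Laplacian construction is beyond the scope of this summary, but the basic ingredient (treating a normalized Laplacian spectrum as a probability distribution and computing its entropy) can be sketched at the node level, i.e. dimension 0. Everything below is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def spectral_entropy(W):
    """Entropy of a weighted graph Laplacian spectrum.

    W: symmetric weighted adjacency matrix. The Laplacian eigenvalues,
    normalized to sum to one, are treated as a probability distribution
    and their Shannon entropy is returned.
    """
    L = np.diag(W.sum(axis=1)) - W      # weighted combinatorial Laplacian
    eig = np.linalg.eigvalsh(L)
    p = eig / eig.sum()
    p = p[p > 1e-12]                    # drop the zero mode(s)
    return float(-(p * np.log(p)).sum())

# A weighted triangle: the entropy summarizes how the Laplacian
# spectrum is spread across its nonzero eigenvalues.
W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 0.5],
              [2.0, 0.5, 0.0]])
H = spectral_entropy(W)
```

Comparing such entropies across dimensions (nodes, links, triangles) is the kind of information-theoretic probe the abstract refers to; the higher-dimensional cases replace the graph Laplacian with the corresponding Hodge Laplacians.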

federica.baccini@phd.unipi.it

Aware Decisions of Individuals and AI


In a context characterized by the increasing availability of data on the one hand, and ever more complex and fast decisions on the other, it is worth examining how individuals can develop a personal self-awareness that lets them effectively apply available data to their decisions. This includes the time-honored problem of the contraposition and integration of analytical and intuitive knowledge, a theme investigated by philosophical, psychological, and decision-theoretic approaches, which has recently acquired a new centrality in foundational machine intelligence studies. Borrowing definitions of concepts such as awareness, tacit knowledge, emotion, and intuition from specialized fields of inquiry, we study behavioral patterns that allow one to understand some of the properties and mechanisms underlying the interaction between explicit and implicit knowledge, and between different individuals characterized as nodes of a network. We find that awareness emerges as a dynamic process that allows the decision-maker to switch from habitual to aware behavior, resulting from a feedback mechanism of self-observation.

federico.bizzarri@student.unisi.it

Michael Lindner

Graph Neural Networks beat Network Science at Predicting Dynamic Stability of Power Grids

A large body of work in network science studies the interplay of a network's topology with observed quantities of interest. Often, network measures that are highly correlated with an observable are sought. For example, several centrality measures quantify the network's vulnerability to attacks at a certain node. Recently, Graph Neural Networks have shown great potential for network prediction tasks. They do not rely on explicitly defined network measures but implicitly learn node embeddings from the topology. We compare different predictive models of a highly nonlinear observable, namely the dynamic stability of power grids as characterized by single-node basin stability (SNBS) and survivability (SURV). We explicitly compute a large number of network measures that might be correlated with SNBS and SURV and provide them as inputs to a linear regression and a multi-layer perceptron. Their performance is then compared to Graph Neural Networks that receive only the network topology and the distribution of generators and loads in the power grid model as inputs. We study power grids of varying size, as well as machine learning models with different numbers of trainable parameters, and find remarkable performance of Graph Neural Networks compared to the more established approaches.


michaellindner@pik-potsdam.de

Inferring Origin-Destination Distribution in Complex Network using a Deep Learning Approach


Predicting the origin-destination (OD) probability distribution of agent transfer is an important problem for managing complex systems. However, the prediction accuracy of the associated statistical estimators suffers from underdetermination. While specific techniques have been proposed to overcome this deficiency, a general approach is still lacking. Here, we propose a deep neural network framework with gated recurrent units (DNNGRU) to address this gap. Our DNNGRU is network-free, as it is trained by supervised learning on time-series data for the volume of agents passing through edges. We use it to investigate how network topologies affect OD prediction accuracy, where performance enhancement is observed to depend on the degree of overlap between the paths taken by different ODs. By comparing against methods that give exact results, we demonstrate the near-optimal performance of our DNNGRU, which consistently outperforms existing methods and alternative neural network architectures under diverse data generation scenarios.
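The precise DNNGRU architecture is not reproduced here; as background, a single gated-recurrent-unit update (the building block named in the abstract) can be sketched as follows, with all shapes and weights being illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: h_new = (1 - z) * h + z * h_candidate.

    x: input vector, h: previous hidden state; the W*/U* matrices are
    the input and recurrent weights of the update gate (z), reset
    gate (r) and candidate state. Biases are omitted for brevity.
    """
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
mats = [rng.standard_normal((d_h, d_in if i % 2 == 0 else d_h)) * 0.1
        for i in range(6)]
h = gru_cell(rng.standard_normal(d_in), np.zeros(d_h), *mats)
```

In the paper's setting, the sequence fed through such cells would be the per-edge agent-volume time series, with a readout layer mapping the final hidden state to the OD distribution.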

lockyue@ntu.edu.sg

Hypergraphs for multiscale cycles in structured data


The characterisation and classification of data from complex spatial systems is a challenge of rapidly increasing importance. Even when restricted to the case of polygonal curves, the problem finds countless applications across the most varied scientific disciplines, from finance and neuroscience to ecology, biophysics, and biology. Motivated by concrete examples arising in nature, I will introduce hyperTDA, a topological pipeline for analysing the structure of spatial curves that combines persistent homology, hypergraph theory, and network science. I will show that the method highlights important segments and structural units of the data, and I will demonstrate hyperTDA on both simulated and experimental data. This is joint work with Agnese Barbensi, Christian Degnbol Madsen, Deborah O. Ajayi, Michael Stumpf, and Heather Harrington.

hee.yoon@maths.ox.ac.uk

The Dirac operator and its use to process topological signals


Topological signals associated not only with nodes but also with links and with the higher-dimensional simplices of simplicial complexes are attracting increasing interest in signal processing, machine learning and network science. Typically, topological signals of a given dimension are investigated and filtered using the corresponding higher-order Laplacian. In this talk, I will introduce the topological Dirac operator, which can be used to process topological signals of different dimensions simultaneously. I will discuss the main spectral properties of the Dirac operator defined on networks, simplicial complexes and multiplex networks, and their relation to higher-order Laplacians. Finally, I will show how the Dirac operator allows us to perform signal processing of coupled topological signals of different dimensions.
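As a concrete illustration of the operator's structure (a sketch under standard conventions, not the speaker's code): for a graph, the Dirac operator interlaces the node-edge boundary matrix B1 with its transpose, couples node and edge signals, and squares to the Hodge Laplacians:

```python
import numpy as np

# Node-edge incidence (boundary) matrix B1 of a triangle:
# nodes 0, 1, 2; oriented edges (0,1), (1,2), (0,2).
B1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]], dtype=float)

# Dirac operator acting on the concatenation of a node signal
# (3 components) and an edge signal (3 components).
D = np.block([[np.zeros((3, 3)), B1],
              [B1.T, np.zeros((3, 3))]])

# Squaring the Dirac operator yields the Hodge Laplacians on the
# diagonal: D^2 = diag(L0, L1), with L0 = B1 @ B1.T the graph
# Laplacian and L1 = B1.T @ B1 the (down) edge Laplacian.
D2 = D @ D
L0 = B1 @ B1.T
L1 = B1.T @ B1
```

The off-diagonal blocks of D are what couple signals of different dimensions, which is exactly the feature the higher-order Laplacians lack.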

ginestra.bianconi@gmail.com

Parenclitic and Synolitic Networks


Abstract: TBA


alexey.zaikin@ucl.ac.uk

Reservoir computing with a single driven pendulum


The study of the natural information-processing capacity of a dynamical system has attracted the attention of many researchers over the last few decades. Reservoir Computing (RC) provides a computational framework to exploit this capacity: various complex machine learning tasks can be performed using a dynamical system as the main computational substrate. There are several examples of dynamical systems successfully used as reservoirs, chosen mainly on the basis of a few usual criteria: high dimensionality, rich nonlinearity and fading memory. In this talk, we will discuss the performance of a low-dimensional dynamical system, namely a single driven pendulum, as a reservoir. Even in conventional neural network models there is a notion that a single neuron can be enough to perform complex tasks. Our objective is to exploit a strikingly simple system like a single pendulum to solve intelligent computational tasks. We also discuss the remarkable results of a proof-of-principle experimental setup of the scheme.
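As an illustrative sketch of the idea (all parameters and the simple memory task below are assumptions, not the speakers' setup): drive a damped pendulum with an input sequence, record its state trajectory as the reservoir response, and train only a linear readout on those states by least squares.

```python
import numpy as np

def pendulum_reservoir(u, dt=0.1, gamma=0.5, omega0=1.0, drive=1.2):
    """Drive a damped pendulum with input sequence u and record its
    state (theta, omega) at every step: the 'reservoir' trajectory.

    theta'' = -gamma * theta' - omega0**2 * sin(theta) + drive * u(t),
    integrated with semi-implicit Euler.
    """
    theta, omega = 0.0, 0.0
    states = []
    for u_t in u:
        omega += dt * (-gamma * omega
                       - omega0**2 * np.sin(theta) + drive * u_t)
        theta += dt * omega
        states.append((theta, omega))
    return np.array(states)

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 500)
X = pendulum_reservoir(u)          # reservoir states, shape (500, 2)

# Linear readout trained by least squares on a toy memory task:
# recover the input one step in the past from the current state.
washout = 50
features = np.hstack([X[washout:-1],
                      np.ones((len(u) - washout - 1, 1))])
target = u[washout - 1:-2]         # u(t-1)
w, *_ = np.linalg.lstsq(features, target, rcond=None)
pred = features @ w
```

Only the readout weights `w` are trained; the pendulum itself is never adjusted, which is the defining trait of reservoir computing.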

shrimali@curaj.ac.in

An efficient and biologically inspired alternative to backpropagation for learning in a neural network


Artificial neural networks, inspired by biological neurons, have proven incredibly successful in a variety of tasks. The conventional and widely accepted mechanism through which neural networks learn is backpropagation, where gradient descent is used to minimize the loss and the chain rule is employed to compute gradients across all layers. However, backpropagation is simply not biologically plausible, and it is often computationally tedious. Here, inspired by what we know about learning in the brain, I introduce a novel biologically plausible learning rule which not only changes the way learning occurs, making it significantly faster and more efficient than backpropagation, but also leads to the emergence of several useful characteristics during learning. In particular, I present a mechanism for learning through local updates combined with gating. Local learning by itself is unable to learn nonlinear functions; however, this lost expressive power can be regained through an input-dependent gating mechanism.

I show that this learning mechanism successfully predicts chaotic time series (which are, naturally, nonlinear), and that, unlike in conventional neural networks, the learned weight curves are smooth, indicating a more intuitive, and perhaps more interpretable, learning mechanism. Additionally, when trained on a sequence of tasks, these networks are significantly better at remembering 'old' tasks than networks trained with backpropagation. Lastly, I discuss the biological plausibility of this learning rule and preliminary experimental support for it. The success of this radically different learning mechanism, which uses very simple local learning and can be adapted to various types of neural networks including graph neural networks, raises several open questions in learning theory and explainable AI, including how to design neural networks personalized to the graph or task being learned.
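The talk's actual rule is not specified in this abstract; the following is a toy sketch of the two ingredients it names, a local delta-rule update on readout weights plus input-dependent gating that supplies the nonlinearity. Every name, shape, and parameter below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Random projection with input-dependent binary gating: the gate
# pattern depends on x, giving an otherwise linear readout the
# power to fit nonlinear targets.
n_hidden = 200
V = rng.standard_normal(n_hidden)
c = rng.uniform(-np.pi, np.pi, n_hidden)

def features(x):
    pre = V * x + c
    gate = (pre > 0).astype(float)   # input-dependent gates
    return gate * pre                # gated linear units

# Local (normalized LMS) delta rule on the readout weights only:
# no gradients are propagated back through the gating layer.
w = np.zeros(n_hidden)
lr = 0.5
xs = rng.uniform(-np.pi, np.pi, 2000)
for x in xs:
    h = features(x)
    err = np.sin(x) - w @ h
    w += lr * err * h / (h @ h + 1e-8)

# Evaluate the fit of the nonlinear target sin(x) on a grid.
xg = np.linspace(-np.pi, np.pi, 101)
mse = float(np.mean([(features(x) @ w - np.sin(x)) ** 2 for x in xg]))
```

The update for each weight uses only its own input and the shared error signal, which is what makes the rule local; the gating alone turns the linear readout into a piecewise-linear function capable of approximating sin(x).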

sanju33@gmail.com