Abstracts

Multilayer networks: structure and function

Ginestra Bianconi

School of Mathematical Sciences, Queen Mary University of London and Alan Turing Institute, London, UK

ABSTRACT: Multilayer networks [1] are emerging as a novel and powerful way to describe complex systems whose elements are related to each other by multiple types of interactions. Multilayer networks are therefore the underlying architecture of complex systems formed by several interacting networks, ranging from the brain to the interactome, which encodes all the biological interactions in the cell.

Uncovering the interplay between multilayer network structure and function is a major theoretical challenge with a vast realm of applications. At the same time, the urgency of understanding real-world multilayer network problems calls for novel theoretical approaches. In this talk we will show how the statistical mechanics theory behind multilayer networks reveals the information encoded in these structures and its effect on multilayer network robustness.
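A hallmark robustness result for multilayer networks is the interdependency cascade: a node stays functional only if it lies in the giant component of every layer, so damage in one layer propagates to the other and back, making the breakdown abrupt. The sketch below is an illustrative toy in plain Python, not code from the talk; the two random layers, their mean degree and the damage fractions are all assumptions chosen for demonstration.

```python
import random

def random_layer(n, avg_deg, rng):
    """Erdos-Renyi-style layer with ~avg_deg edges per node, as adjacency sets."""
    adj = [set() for _ in range(n)]
    edges = int(n * avg_deg / 2)
    while edges > 0:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and v not in adj[u]:
            adj[u].add(v)
            adj[v].add(u)
            edges -= 1
    return adj

def giant_component(adj, alive):
    """Largest connected node set within `alive` (iterative depth-first search)."""
    seen, best = set(), set()
    for start in alive:
        if start in seen:
            continue
        comp, stack = {start}, [start]
        seen.add(start)
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    comp.add(v)
                    stack.append(v)
        if len(comp) > len(best):
            best = comp
    return best

def mutual_giant_component(adj_a, adj_b, alive):
    """Iterate the cascade until the surviving set is stable in both layers."""
    while True:
        gc = giant_component(adj_b, giant_component(adj_a, alive))
        if gc == alive:
            return gc
        alive = gc

rng = random.Random(42)
n = 2000
layer_a = random_layer(n, 4.0, rng)
layer_b = random_layer(n, 4.0, rng)
for p in (0.9, 0.7, 0.5):   # fraction of nodes kept after random damage
    kept = set(rng.sample(range(n), int(p * n)))
    frac = len(mutual_giant_component(layer_a, layer_b, kept)) / n
    print(f"p={p:.1f}: mutual giant component holds {frac:.2f} of all nodes")
```

Unlike single-layer percolation, where the giant component shrinks gradually, the mutual giant component collapses to nearly zero once the kept fraction drops below a critical value.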

[1] Bianconi, G., 2018. Multilayer networks: structure and function. Oxford University Press.

How randomness and collective dynamics define a stem cell

Bernat Corominas Murtra

Institute of Biology, University of Graz, Austria

ABSTRACT: What defines the number and dynamics of the stem cells that generate and renew biological tissues? Several molecular markers have been described that predict stem cell potential with great success, e.g., in tissues like the blood. However, the biochemical identification of stem cells has proven problematic in other tissues, such as the developing mammary gland. In this talk I will propose a complementary approach that mathematically describes “stemness” as an emergent property arising solely from dynamical and geometrical considerations. With this approach, one can predict the robust emergence of a region of functional stem cells, and make predictions about lineage-survival probability, among other empirically testable observables. The theory has been developed in constant feedback with new experimental results from the exploration of intestinal crypt renewal dynamics and of mammary gland and kidney development. The presented approach does not neglect the key role of biomarkers: instead, it points towards the existence of another layer of complexity, an ecological-like organization of tissues. In this context, collective phenomena, stochastic dynamics and geometry play an active role in determining the emergence of different cell functionalities.


Bridging cognitive control and network theory

Giovanni Petri

Fondazione ISI, Torino

ABSTRACT: The ability to learn new tasks and generalize to others is a remarkable characteristic of both human brains and recent artificial intelligence systems. The ability to perform multiple tasks simultaneously is also a key characteristic of parallel architectures, evident in the human brain and exploited in traditional parallel computing. Here we show that these two characteristics reflect a fundamental tradeoff between interactive parallelism, which supports learning and generalization, and independent parallelism, which supports processing efficiency through concurrent multitasking. We find that even modest reliance on shared representations, which support learning and generalization, strongly constrains the number of tasks that can be performed in parallel. This has profound consequences for understanding the human brain’s mix of sequential and parallel capabilities, as well as for the development of artificial intelligence systems that can optimally manage the tradeoff between learning and processing efficiency. In conclusion, we discuss recent results on how cognitive processing costs emerge from both control and learning under multitasking pressure.
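The constraint that shared representations place on multitasking can be illustrated with a small toy model (my own construction, not the talk's actual model): each task maps one input representation to one output representation, and two tasks interfere when they share either. Tasks that can run concurrently form an independent set of the interference graph, estimated here with a greedy heuristic that never reuses an input or an output.

```python
import random

def greedy_parallel_set(tasks):
    """Greedy independent set of the interference graph over (input, output) tasks."""
    chosen, used_in, used_out = [], set(), set()
    for inp, out in tasks:
        if inp not in used_in and out not in used_out:
            chosen.append((inp, out))
            used_in.add(inp)
            used_out.add(out)
    return chosen

rng = random.Random(1)
n_tasks = 40
# Fewer available representations means more sharing between tasks,
# and the number of tasks executable in parallel drops sharply.
for n_repr in (40, 10, 4):
    tasks = [(rng.randrange(n_repr), rng.randrange(n_repr)) for _ in range(n_tasks)]
    print(f"{n_repr:2d} representations: {len(greedy_parallel_set(tasks))} tasks in parallel")
```

Even this crude heuristic shows the qualitative effect: parallel capacity is bounded by the number of distinct representations, not by the number of tasks the system knows.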

Sloppy models, differential geometry, and why science works

James P. Sethna

Cornell University, USA

ABSTRACT: Models of systems biology, climate change, ecology, complex instruments, and macroeconomics have parameters that are hard or impossible to measure directly. If we fit these unknown parameters, fiddling with them until they agree with past experiments, how much can we trust their predictions? We have found that predictions can be made despite huge uncertainties in the parameters – many parameter combinations are mostly unimportant to the collective behavior. We will use ideas and methods from differential geometry and approximation theory to explain sloppiness as a ‘hyperribbon’ structure of the manifold of possible model predictions. We show that physics theories are also sloppy – that sloppiness may be the underlying reason why the world is comprehensible. We will present new methods for visualizing this model manifold for probabilistic systems – such as the space of possible universes as measured by the cosmic microwave background radiation.
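Sloppiness appears already in the smallest nonlinear fits. A classic toy example (illustrative, not code from the talk) is fitting y(t) = exp(-k1*t) + exp(-k2*t): the eigenvalues of the Fisher information J^T J span orders of magnitude even with two parameters, because the stiff combination of decay rates is well constrained by the data while the sloppy one barely affects predictions. The decay rates and sample times below are chosen purely for illustration.

```python
import math

def fisher_2x2(k1, k2, times):
    """J^T J for parameters (k1, k2), with J_ij = d y(t_i)/d k_j = -t_i exp(-k_j t_i)."""
    a = sum((t * math.exp(-k1 * t)) ** 2 for t in times)
    c = sum((t * math.exp(-k2 * t)) ** 2 for t in times)
    b = sum(t * t * math.exp(-(k1 + k2) * t) for t in times)
    return a, b, c

def eig_sym_2x2(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2
    disc = math.sqrt(((a - c) / 2) ** 2 + b * b)
    return mean + disc, mean - disc

times = [0.5 * i for i in range(1, 11)]       # sample times, chosen arbitrarily
lam_stiff, lam_sloppy = eig_sym_2x2(*fisher_2x2(1.0, 1.2, times))
print(f"stiff eigenvalue:  {lam_stiff:.4f}")
print(f"sloppy eigenvalue: {lam_sloppy:.6f}  (ratio ~ {lam_stiff / lam_sloppy:.0f})")
```

Predictions along the stiff eigendirection are tightly pinned down by the fit, while the sloppy direction can move the parameters enormously at almost no cost, which is why fitted parameters can be individually uncertain yet collectively predictive.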

with: Katherine Quinn, Mark Transtrum, Han Kheng Teoh, Ben Machta, Colin Clement, Archishman Raju, Heather Wilber, Ricky Chachra, Ryan Gutenkunst, Joshua J. Waterfall, Fergal P. Casey, Kevin S. Brown, Christopher R. Myers



Efficient estimation of neural tuning during naturalistic behavior

Edoardo Balzani

New York University, USA

ABSTRACT: Systems neuroscience is gradually shifting from studying brain function via traditional tasks with well-controlled stimuli toward understanding activity during naturalistic behavior. However, with the increased complexity of task-relevant features, standard analyses such as tuning-function estimation become challenging. In this project, we propose an estimation method based on the Poisson Generalized Additive Model (PGAM) framework, which is particularly suited to handle high-dimensional, correlated task-relevant input variables. The model builds a non-linear map between a large set of task variables (sensory inputs, behavioral outputs, task events, LFP instantaneous phase, activity of other neurons…) and the spike counts of a recorded neuron. We developed a method to efficiently estimate model parameters by optimizing a cross-validation score, and we derived approximate confidence intervals over model parameters. This allowed us to robustly identify a minimal set of task features that each neuron is responsive to, circumventing computationally demanding model comparison. We showed that the model outperforms traditional GLMs in terms of both fit quality and computing time. We applied our method to simultaneous neural recordings from three brain areas (medial superior temporal area, MSTd; area 7a; and dorsolateral prefrontal cortex, dlPFC) in monkeys performing a virtual reality spatial navigation task. Using this method, we were able to reveal area-specific mixed selectivity and preferential coupling between neurons with similar tuning. While we identified LFP-to-spike synchronization between dlPFC and area 7a in line with the brain anatomy, we found that response statistics in dlPFC more closely resemble those of the sensory area MSTd. This surprising result is strengthened by the unit-to-unit coupling, which predicts a higher probability of directional coupling from MSTd to dlPFC.
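The core ingredients of this kind of model (Poisson spike counts, a log firing rate built from smooth basis functions of a task variable, penalized likelihood fitting) can be sketched on a simulated 1-D tuning curve. This is a toy reimplementation of the idea, not the authors' PGAM code: the Gaussian-bump basis, ridge penalty and plain gradient ascent are illustrative stand-ins for the spline bases, smoothness penalties and optimized fitting of a full GAM.

```python
import math
import random

rng = random.Random(0)
n_obs, n_basis = 300, 8
centers = [k / (n_basis - 1) for k in range(n_basis)]
width = 0.15

def basis(x):
    """Gaussian-bump basis functions spanning the task-variable range [0, 1]."""
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def poisson_sample(lam):
    """Knuth's algorithm, adequate for the small rates used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def true_log_rate(x):
    """Simulated neuron tuned to a 1-D task variable, peak tuning at x = 0.6."""
    return 2.0 * math.exp(-((x - 0.6) / 0.12) ** 2) - 0.5

xs = [rng.random() for _ in range(n_obs)]
ys = [poisson_sample(math.exp(true_log_rate(x))) for x in xs]

# Maximize the ridge-penalized Poisson log-likelihood by gradient ascent:
# log rate eta_i = sum_k w_k * phi_k(x_i), gradient = sum_i (y_i - exp(eta_i)) phi_k.
phi = [basis(x) for x in xs]
w = [0.0] * n_basis
step, ridge = 1e-3, 1.0
for _ in range(800):
    etas = [sum(wk * pk for wk, pk in zip(w, p)) for p in phi]
    resid = [y - math.exp(e) for y, e in zip(ys, etas)]
    for k in range(n_basis):
        grad_k = sum(r * p[k] for r, p in zip(resid, phi)) - ridge * w[k]
        w[k] += step * grad_k

# The recovered tuning curve should peak near the true peak at x = 0.6.
grid = [i / 100 for i in range(101)]
fit = [sum(wk * pk for wk, pk in zip(w, basis(x))) for x in grid]
peak = grid[fit.index(max(fit))]
print(f"estimated tuning peak at x = {peak:.2f}")
```

Because the Poisson log-likelihood with a log link is concave in the weights, even this crude optimizer recovers the tuning curve; the full framework additionally selects which task variables matter and attaches confidence intervals to each tuning function.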