Network dynamics
Networks are gaining considerable importance in the modeling of natural and artificial phenomena. They provide the natural playground for a wide variety of problems that assume a heterogeneous support for the connections among constituents. In the brain, electric signals flow on neuronal networks. The crowded world of cells is segmented by microtubules, which form an intricate cobweb of interlinked paths. The Internet, and its multifaceted applications, relies heavily on the topology of the underlying cyber network. Human mobility patterns, with their implications for transportation design and epidemic control, can be described, at a plausible level of abstraction, as effective graphs linking different spatial locations. Our general goal is to study reaction-diffusion processes for systems hosted on network-like structures, with the aim of understanding and possibly controlling the emerging dynamics.
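As a minimal sketch of the diffusive part of such processes, one can evolve a concentration on a small toy graph (hypothetical example, not one of our models) using the graph Laplacian, which generalizes the diffusion operator to a heterogeneous support:

```python
import numpy as np

# Diffusion on a small undirected toy graph, driven by the graph
# Laplacian L = D - A. Mass flows along the edges and the total
# concentration is conserved; the uniform state is the fixed point.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency matrix
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian

u = np.array([1.0, 0.0, 0.0, 0.0])          # all mass on node 0
dt = 0.05
for _ in range(2000):                       # explicit Euler steps
    u = u - dt * L @ u                      # du/dt = -L u

print(u)  # approaches the uniform state [0.25, 0.25, 0.25, 0.25]
```

A reaction-diffusion model on a network adds a local (generally non-linear) reaction term to this Laplacian coupling on each node.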
Population dynamics
Investigating the dynamical evolution of an ensemble of microscopic entities in mutual interaction constitutes a rich and fascinating problem, of paramount importance and cross-disciplinary interest. The intrinsic discreteness of any individual-based stochastic model results in finite-size corrections to the ideal mean-field dynamics. Under specific conditions, such microscopic disturbances can be amplified through a complex resonance mechanism and give rise to organized spatio-temporal patterns. More specifically, the measured concentration, which reflects the distribution of the interacting entities (e.g. chemical species, biomolecules), can oscillate regularly in time and/or display a spatially patched profile: collective phenomena that testify to a surprising degree of macroscopic order, as mediated by the stochastic component of the dynamics. Our research aims at exploring these effects in detail. This task is pursued with reference to specific models of broad theoretical and applied relevance. Applications range from neuroscience to molecular biology.
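The finite-size fluctuations mentioned above can be illustrated with a standard Gillespie (exact stochastic) simulation of a toy birth-death process; the parameters below are illustrative, not taken from any of our models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gillespie simulation of a birth-death process with reactions
#   A -> 2A  (propensity b*n)   and   2A -> A  (propensity d*n^2/N).
# The mean-field equation dn/dt = b*n - d*n^2/N has fixed point
# n* = N*b/d; at finite N the trajectory fluctuates around it.
b, d, N = 1.0, 1.0, 200        # birth rate, death rate, system size
n, t, t_max = 50, 0.0, 50.0
trajectory = []
while t < t_max and n > 0:
    rates = np.array([b * n, d * n * n / N])  # reaction propensities
    total = rates.sum()
    t += rng.exponential(1.0 / total)         # waiting time to next event
    if rng.random() < rates[0] / total:
        n += 1                                # birth
    else:
        n -= 1                                # death
    trajectory.append(n)

print(np.mean(trajectory[len(trajectory) // 2 :]))  # fluctuates near n* = 200
```

The residual fluctuations around n* scale like the square root of the system size N, which is the seed that resonance mechanisms can amplify into macroscopic patterns.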
Statistical mechanics of long-range systems
Long-range interactions are such that the two-body interaction potential decays at large distances with a power-law exponent smaller than the space dimension. The thermodynamic and dynamical properties of physical systems subject to long-range couplings were poorly understood until a few years ago, and their study was essentially restricted to astrophysics (e.g. self-gravitating systems). Later, it was recognised that long-range systems display universal out-of-equilibrium features, for which conventional equilibrium statistical mechanics is inadequate. In particular, it has been shown that long-range interacting systems generally exhibit a whole set of new qualitative properties and behaviours: ensemble inequivalence (negative specific heat, temperature jumps), long-time relaxation (quasi-stationary states), violations of ergodicity and disconnection of the energy surface, subtleties in the relation between the fluid (i.e. continuum) picture and the particle (granular) picture, new macroscopic quantum effects, etc. While progress has been made in understanding such phenomena, an overall thermodynamic and statistical framework is still lacking. Our work aims at contributing to the development of such a comprehensive picture.
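The defining condition, and the reason why additivity fails, can be stated compactly (a standard estimate, not specific to our work):

```latex
% Long-range condition: a pair potential decaying as a power law,
%   V(r) \sim \frac{J}{r^{\alpha}}, \qquad 0 \le \alpha \le d,
% where d is the space dimension. The energy per particle then grows
% with the system size R:
%   \epsilon \;\propto\; \int_{1}^{R} \frac{J}{r^{\alpha}}\, r^{d-1}\, dr
%            \;\propto\; R^{\,d-\alpha} \qquad (\alpha < d),
% so the total energy is non-additive and the standard (additive)
% equilibrium statistical mechanics need not apply.
```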
Foundations of Machine Learning and AI applications
Machine learning (ML) refers to a broad field of study, with multifaceted applications of cross-disciplinary breadth. ML ultimately aims at developing computer algorithms that improve automatically through experience. Systems can indeed learn from data, so as to identify distinctive patterns and make consequent decisions with minimal human intervention. In our group we apply machine-learning methods to a wide range of problems, from biology to materials science. We also seek to develop novel strategies for optimal learning, so as to improve on current methods. We recently proposed a novel learning scheme anchored in reciprocal space: instead of adjusting the weights in direct space, one trains the eigenvalues and eigenvectors of suitable transfer operators. Interestingly, one can freeze the eigenvectors to reference entries and restrict the learning to the eigenvalues alone. In doing so one still obtains remarkable performance, in terms of classification scores, while acting on a considerably smaller set of adjustable parameters.
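The core idea of the spectral parametrisation can be sketched in a few lines (a toy illustration with assumed names and sizes, not the actual implementation): the weights of a linear layer are written through their eigendecomposition, and only the eigenvalues are exposed to training.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Spectral parametrisation of a linear layer: W = Phi @ diag(lam) @ Phi^{-1}.
# The eigenvectors Phi are frozen at reference entries; learning acts on
# the n eigenvalues `lam` instead of the n*n entries of W.
Phi = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # frozen eigenvectors
Phi_inv = np.linalg.inv(Phi)
lam = rng.standard_normal(n)                         # trainable eigenvalues

def layer(x, lam):
    """Apply the spectrally parametrised linear layer to input x."""
    W = Phi @ np.diag(lam) @ Phi_inv
    return W @ x

x = rng.standard_normal(n)
y = layer(x, lam)
print(y.shape)  # (8,)
```

With all eigenvalues set to one the layer reduces to the identity, which makes the role of the eigenvalues as the effective learning degrees of freedom transparent; any gradient-based optimiser can then update `lam` alone.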
Neuronal dynamics and inverse schemes
The computational power of a neuronal system stems from the peculiar non-linear features displayed by the individual units, and from their mutual interactions, as mediated by the network topology. Network topology is only one of many possible sources of heterogeneity in the brain. Neurons may in fact also exhibit different intrinsic dynamics, an additional ingredient which can significantly impact the functioning of the brain.
To shed light on these issues, we considered a Leaky Integrate-and-Fire (LIF) neuronal model with short-term plasticity. The neurons are coupled via a directed network and display a degree of heterogeneity in the associated current, which sets the degree of effective excitability. Assuming the above dynamical model and using available input data (time series of neuronal activity, e.g. time-resolved calcium images), we set out to study an inverse problem. Specifically, the aim of the method is to recover the distribution of the (in-degree) connectivity, labelled k, which characterizes the embedding network, as well as the distribution of the assigned currents, denoted a. The first version of the method was aimed at recovering structural and dynamical information assuming a system solely made of excitatory neurons (tested on data from mice with an induced stroke). The analysis was then extended to a setting where inhibitory neurons are also considered (tested on data from zebrafish). In this latter setting the problem is to disentangle the signals stemming from the two simultaneously interacting populations of neurons.
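To illustrate how the current a sets the effective excitability, here is a minimal single-neuron LIF sketch (illustrative parameters and names, without the network coupling and short-term plasticity of the full model):

```python
import numpy as np

# Single Leaky Integrate-and-Fire neuron: the membrane potential v
# relaxes toward the input current a and is reset after crossing a
# threshold. The current a plays the role of the excitability parameter.
def lif_spike_times(a, v_thr=1.0, v_reset=0.0, tau=1.0, dt=1e-3, t_max=20.0):
    """Integrate dv/dt = (a - v)/tau with threshold-and-reset; return spike times."""
    v, t, spikes = v_reset, 0.0, []
    while t < t_max:
        v += dt * (a - v) / tau       # leaky integration (explicit Euler)
        t += dt
        if v >= v_thr:                # threshold crossing
            spikes.append(t)
            v = v_reset               # reset
    return spikes

# A supra-threshold current (a > v_thr) yields regular firing;
# a sub-threshold current (a <= v_thr) produces no spikes at all.
print(len(lif_spike_times(a=1.5)), len(lif_spike_times(a=0.8)))
```

In the full model, each neuron additionally receives synaptic input from its in-neighbours on the directed graph; the inverse scheme exploits the dependence of the firing statistics on both a and the in-degree k.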