Friday 13th

Strongly disordered quantum systems can undergo many-body localization (MBL): a form of strong non-ergodicity that occurs in thermodynamically large interacting systems but is very different from the glass transition. Unlike the latter, MBL rigorously exists only in low dimensions. Nevertheless, strong crossovers from bad to good conductors survive in high dimensions as well, where temperature is a main driver of the crossover. In this talk, I will discuss several interesting effects of thermal fluctuations on localization. Slow thermal fluctuations of the disorder felt by quantum particles enhance transport, and entail a phase transition in the annealed conductance within the insulating phase. Since fluctuations always enhance conductance, one might think that at given parameters a system could be both self-consistently localized and non-fluctuating, or delocalized and fluctuating; however, I will argue that such a coexistence scenario is excluded. In three dimensions in the continuum, genuine MBL is impossible due to single-particle mobility edges at high energy. However, thermal excitations lower the effective mobility edge very dramatically, which leads to an interesting super-Arrhenius behavior of transport at low temperature in the badly conducting phase.
We introduce and study a toy model for anomalous transport and Griffiths effects in one-dimensional quantum disordered isolated systems near the Many-Body Localization (MBL) transition. The model consists of a collection of 1d tight-binding chains with on-site random energies, locally coupled to a weak GOE-like perturbation, which mimics the effect of the delocalizing interactions by providing a local broadening of the Poisson spectrum. The model is designed as a microscopic and analytically tractable realization of the effective coarse-grained models introduced in the framework of the strong-disorder renormalization group (RG) approach to MBL. Increasing the coupling to the GOE perturbation, we find a delocalization transition from an insulating phase to a conducting one driven by the proliferation of quantum avalanches, which does not fit the standard paradigm of Anderson localization. In particular, an intermediate Griffiths region emerges, where exponentially distributed insulating segments coexist with a few, rare resonances. Average correlations decay as a stretched exponential and diverge with the length of the chain, while typical correlations decay exponentially fast, indicating that the conducting inclusions have a fractal structure and that the localization length is broadly distributed at the critical point. This behavior is consistent with a Kosterlitz-Thouless-like criticality of the transition. Transport and relaxation are dominated by rare resonances and rare strong insulating regions, and show anomalous behaviors strikingly similar to those observed in recent simulations and experiments in the bad-metal delocalized phase preceding MBL. In particular, we find sub-diffusive transport and power-law decay of the return probability at large times, with exponents that gradually change as one moves across the intermediate region. Concomitantly, the a.c. conductivity vanishes near zero frequency with an anomalous power law.
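The basic ingredients of such a toy model are easy to write down numerically. The sketch below builds a disordered tight-binding chain coupled to a small GOE block; the geometry (a single chain, coupled at one site) and all parameter values are illustrative assumptions, not those of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_hamiltonian(L=200, W=4.0, t=1.0, g=0.1, M=10):
    """Illustrative sketch: a 1d Anderson chain (on-site disorder of width W,
    hopping t) weakly coupled with strength g to an M-level GOE block,
    mimicking the local broadening induced by delocalizing interactions.
    Parameter names and values are illustrative, not from the talk."""
    # Anderson chain: random on-site energies plus nearest-neighbour hopping
    H_chain = np.diag(rng.uniform(-W, W, L))
    H_chain += t * (np.eye(L, k=1) + np.eye(L, k=-1))
    # GOE block with the usual symmetric-Gaussian normalization
    A = rng.normal(size=(M, M))
    H_goe = (A + A.T) / np.sqrt(2 * M)
    # Assemble the full Hamiltonian, coupling site 0 to the GOE block
    H = np.zeros((L + M, L + M))
    H[:L, :L] = H_chain
    H[L:, L:] = H_goe
    H[0, L:] = H[L:, 0] = g * rng.normal(size=M)
    return H

evals = np.linalg.eigvalsh(toy_hamiltonian())
```

Sweeping the coupling g and the disorder W in such a construction is one way to probe the crossover described above on small systems.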
Path integrals are a central tool to describe quantum or thermal fluctuations of particles or fields. Path integrals are the mirror image of our conventional Riemann integrals, with functions replacing the real numbers one usually sums over. However, unlike conventional integrals, in their usual setting path integration suffers a serious drawback: in general, one cannot make non-linear changes of variables without committing an error of some sort. Here we come up with cures for systems described by one degree of freedom. Our main result is a construction of path integration free of this longstanding problem, through a direct time-discretization procedure.
Leticia F. Cugliandolo, Vivien Lecomte, Frédéric Van Wijland, arXiv:1806.09486, J. Phys. A Lett. (to appear)

Matteo Marsili

The J_{i,j}=±J Ising learning machine

Optimal Learning Machines (OLM) are systems that extract maximally informative representations from data. At a given resolution, they maximise the relevance, which is the entropy of their energy spectrum. In order to understand their peculiar properties, we study J_{i,j}=±J fully connected Ising models and contrast their properties with those of more familiar models in this class, such as ferromagnets (J_{i,j}=J). The main finding is that optimal Ising learning machines are characterised by inhomogeneous distributions of couplings and that their relevance increases as h_E log(n) with the number n of spins, with h_E>1. This contrasts with the behaviour of statistical physics models, where h_E=1/2 for off-critical systems and h_E=3/4 at the critical point. Yet, since the majority of couplings are ferromagnetic (antiferromagnetic ones being sub-extensive), the thermodynamic properties of models that have superior learning performance are indistinguishable from those of ferromagnets.
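For small n, the relevance (the entropy of the energy spectrum) can be computed by exhaustive enumeration. The sketch below, with illustrative couplings and sizes rather than the models analysed in the talk, contrasts a ferromagnet J_{i,j}=J with a random ±J coupling matrix; the ferromagnet's energy depends only on the magnetization, so its spectrum carries less entropy.

```python
import itertools
import numpy as np

def relevance(J, n):
    """Entropy H[E] = -sum_E p(E) log p(E) of the energy spectrum,
    where p(E) is the fraction of the 2^n spin states with energy E.
    Exhaustive enumeration: illustrative only, feasible for small n."""
    energies = []
    for s in itertools.product([-1, 1], repeat=n):
        s = np.array(s)
        energies.append(-0.5 * s @ J @ s)  # E = -sum_{i<j} J_ij s_i s_j
    _, counts = np.unique(np.round(energies, 9), return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

n = 10
J_ferro = np.ones((n, n)) - np.eye(n)  # J_ij = +J ferromagnet
rng = np.random.default_rng(1)
J_mixed = np.triu(np.sign(rng.standard_normal((n, n))), 1)  # random +/-J
J_mixed += J_mixed.T

print(f"ferromagnet: {relevance(J_ferro, n):.3f}, mixed: {relevance(J_mixed, n):.3f}")
```

With inhomogeneous ±J couplings the number of distinct energy levels grows, and with it the relevance, in line with the h_E > 1 scaling reported above.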
In this talk I will review basic concepts and models that contributed to our multiscale understanding of the physical-chemical basis of evolutionary dynamics. First, I will briefly outline the statistical mechanical analysis, which uncovered the energy gap criterion - the necessary and sufficient condition for a heteropolymer sequence to encode a foldable protein. I will then discuss the analogy between sequence selection for energy gaps and the statistical mechanics of a class of generalized spin models, which resulted in the 1993 discovery of the canonical (Boltzmann) distribution of the probability to find a foldable sequence. The statistical mechanical view of sequence selection developed in the early 2000s enjoyed a renaissance with the development of statistical methods to derive structural information about proteins from the analysis of variation in multiple sequence alignments. The analogy between spin statistics and protein evolution provides insights into the principles of selection of protein structure, including aspects such as designability and its structural determinants, as well as the concept of “selective temperature”, which allows one to quantify the strength of biophysical evolutionary pressure on protein families. Next, I will present recent efforts at modeling evolutionary dynamics that merge molecular mechanisms with population genetics. The novel multiscale models integrate the molecular effects of mutations on physical properties of proteins into physically intuitive yet detailed genotype-phenotype relationship (GPR) assumptions. I will present a range of models, from a simple analytical diffusion-based model on biophysical fitness landscapes to more sophisticated computational models of populations of model cells, where genetic changes are mapped into molecular effects using biophysical modeling of proteins and the ensuing fitness changes determine the fate of mutations in realistic population dynamics.
Examples of insights derived from biophysics-based multiscale models include the scale-free character of the Protein Universe, the fundamental limit on mutation rates in living organisms, the physics of thermal adaptation, the co-evolution of protein interactions and abundances in the cytoplasm, and related results, some of which I will present and discuss.

Daniel Fisher

Chaotic ecology, evolution, and fine scale diversity

The enormous diversity of species is a striking feature of life on Earth — yet still a major puzzle. This has been exacerbated by recent evidence showing that even within single microbial species, extensive diversity can coexist in the same location. Why doesn’t survival-of-the-fittest eliminate this? Do “micro-niches” have to be invoked as an explanation? Progress on simple models of such fine-scale diversity will be reviewed, and challenges will be raised for theorists working on random systems.

Rémi Monasson

Optimal capacity-resolution trade-off in memories of multiple continuous attractors

Continuous attractor neural networks are an important concept in computational neuroscience to explain how sensory coordinates could be memorized by a noisy neural population. We study here the optimal storage of L continuous D-dimensional attractors in a single network of N interacting neurons. We show that the capacity, i.e. the maximal ratio L/N, decreases as |log(ε)|^−D, where ε is the error on the position encoded by the bump of neural activity along each attractor. Hence, recurrent neural nets are flexible memory devices capable of storing a large number of maps at high spatial resolution. These results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner’s classical theory of the perceptron to the case of quenched spatially correlated patterns.
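For intuition, a single one-dimensional (D=1) attractor can be sketched with a classic ring network: neurons arranged on a ring with cosine connectivity relax to a bump of activity whose peak position is the stored coordinate. The connectivity, the saturating nonlinearity, and all parameter values below are illustrative assumptions, not the model analysed in the talk.

```python
import numpy as np

def ring_attractor_bump(N=100, J0=-1.0, J1=3.0, steps=500, dt=0.1):
    """Toy continuous attractor: N rate neurons on a ring, connectivity
    W(theta, theta') = (J0 + J1 cos(theta - theta')) / N, saturating rate
    dynamics dr/dt = -r + tanh(W r). For J1 large enough, a random initial
    state relaxes to a bump whose angular position is the memorized value.
    All parameters are illustrative, not from the talk."""
    theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
    W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N
    r = 0.5 * np.random.default_rng(2).standard_normal(N)  # random initial rates
    for _ in range(steps):
        r += dt * (-r + np.tanh(W @ r))  # Euler step of the rate dynamics
    return theta, r

theta, r = ring_attractor_bump()
peak = theta[np.argmax(r)]  # decoded position along the attractor
```

Because the connectivity is rotation-invariant, the bump can sit at any angle: the ring of bump positions is the continuous attractor, and the readout error on `peak` plays the role of the resolution ε above.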