Selected Publications

The structure of hippocampal CA1 interactions optimizes spatial coding across experience

Aim: Dissect pairwise interactions between hippocampal neurons and quantify their effect on spatial coding.

Significance:

Local circuit interactions play a key role in neural computation and are dynamically shaped by experience. However, measuring and assessing their effects during behavior remains a challenge. Here we combine techniques from statistical physics and machine learning to develop new tools for determining the effects of local network interactions on neural population activity. This approach reveals highly structured local interactions between hippocampal neurons, which make the neural code more precise and easier to read out by downstream circuits, across different levels of experience. More generally, the novel combination of theory and data analysis in the framework of maximum entropy models enables traditional neural coding questions to be asked in naturalistic settings.

Paper: https://doi.org/10.1523/JNEUROSCI.0194-23.2023
Code: https://github.com/michnard/CA1_network_interactions
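
To give a concrete sense of the model class, below is a minimal, illustrative sketch of fitting a pairwise maximum entropy (Ising-like) model, P(s) proportional to exp(h·s + s·J·s) with symmetric couplings J, to binarized population activity. It is not the code from the repository above: the synthetic Bernoulli data stand in for binned spike trains, the population is kept small so the partition function can be enumerated exactly, and the learning rate and iteration count are arbitrary.

```python
import numpy as np
from itertools import product

# Illustrative pairwise maximum entropy fit by gradient ascent on the
# log-likelihood, matching model means <s_i> and correlations <s_i s_j>
# to the data.  Small N only: the partition function is enumerated exactly.

rng = np.random.default_rng(1)
N, T = 8, 5000
data = (rng.random((T, N)) < 0.1).astype(float)  # stand-in for binned, binarized spikes

states = np.array(list(product([0.0, 1.0], repeat=N)))  # all 2^N binary words

def model_moments(h, J):
    """Exact moments under P(s) proportional to exp(h.s + s.J.s)."""
    E = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    mean = p @ states
    corr = states.T @ (states * p[:, None])
    return mean, corr

emp_mean = data.mean(axis=0)
emp_corr = (data.T @ data) / T

h = np.log(emp_mean / (1 - emp_mean))  # start from the independent model
J = np.zeros((N, N))
lr = 0.5
for _ in range(2000):
    m_mean, m_corr = model_moments(h, J)
    h += lr * (emp_mean - m_mean)
    dJ = lr * (emp_corr - m_corr)
    np.fill_diagonal(dJ, 0.0)          # couplings stay off-diagonal
    J += (dJ + dJ.T) / 2               # and symmetric

m_mean, m_corr = model_moments(h, J)
print("max moment mismatch:",
      np.abs(emp_mean - m_mean).max(), np.abs(emp_corr - m_corr).max())
```

For larger populations, exact enumeration is infeasible and approximate fitting (e.g., pseudolikelihood or sampling-based gradients) is needed; treat the sketch purely as an illustration of the model class rather than of the paper's analysis.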


Nonlinear computations in spiking neural networks through multiplicative synapses

Aim: Derive and implement nonlinear dynamics in networks of spiking neurons.

Abstract:

The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While nonlinear computations can be implemented successfully in spiking neural networks, this requires supervised training and the resulting connectivity can be hard to interpret. In contrast, the required connectivity for any computation in the form of a linear dynamical system can be directly derived and understood with the spike coding network (SCN) framework. These networks also have biologically realistic activity patterns and are highly robust to cell death. Here we extend the SCN framework to directly implement any polynomial dynamical system, without the need for training. This results in networks requiring a mix of synapse types (fast, slow, and multiplicative), which we term multiplicative spike coding networks (mSCNs). Using mSCNs, we demonstrate how to directly derive the required connectivity for several nonlinear dynamical systems. We also show how to implement higher-order polynomial dynamics with coupled networks that use only pair-wise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work demonstrates a novel method for implementing nonlinear computations in spiking neural networks, while keeping the attractive features of standard SCNs (robustness, realistic activity patterns, and interpretable connectivity). Finally, we discuss the biological plausibility of our approach, and how its high accuracy and robustness may be of interest for neuromorphic computing.

Paper: https://doi.org/10.24072/pcjournal.69
Code: https://github.com/michnard/mult_synapses
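
For intuition about the framework the paper builds on, here is a minimal numerical sketch of a standard (linear) spike coding network implementing a damped 2-D oscillator. It is not the paper's code (see the repository above), and all parameter values are arbitrary choices for the demo; the mSCN extension additionally introduces slow multiplicative synapses acting on products of filtered spike trains to realize polynomial dynamics, which is not shown here.

```python
import numpy as np

# Minimal sketch of a *linear* spike coding network (SCN) implementing
# a damped 2-D oscillator dx/dt = A x + c(t).  Parameter values are
# illustrative only.

rng = np.random.default_rng(0)
dt, T, lam = 1e-3, 5.0, 10.0             # time step (s), duration (s), readout decay (1/s)
N, M = 40, 2                             # neurons, latent dimensions
A = np.array([[-0.5, -2 * np.pi],
              [2 * np.pi, -0.5]])        # target dynamics: slowly damped rotation

D = rng.standard_normal((M, N))
D /= np.linalg.norm(D, axis=0) * 10      # decoding weights with small norm

W_fast = -D.T @ D                        # fast synapses: instantaneous resets / inhibition
W_slow = D.T @ (A + lam * np.eye(M)) @ D # slow synapses encode the dynamics A
thresh = 0.5 * np.sum(D**2, axis=0)      # spiking thresholds

V = np.zeros(N)                          # membrane voltages
r = np.zeros(N)                          # filtered spike trains
x = np.zeros(M)                          # ground-truth latent state (for comparison)
err = []

for t in np.arange(0.0, T, dt):
    c = np.array([10.0, 0.0]) if t < 0.1 else np.zeros(M)  # brief input kick
    x = x + dt * (A @ x + c)                                # true trajectory

    # at most one spike per step (common numerical convenience for SCNs)
    spikes = np.zeros(N)
    i = np.argmax(V - thresh)
    if V[i] > thresh[i]:
        spikes[i] = 1.0

    V = V + dt * (-lam * V + D.T @ c + W_slow @ r) + W_fast @ spikes
    r = r + dt * (-lam * r) + spikes
    err.append(x - D @ r)                                   # readout x_hat = D r

print("readout RMSE:", np.sqrt(np.mean(np.square(err))))
```

The slow connectivity D^T (A + lambda I) D is what encodes the target dynamics here; the mSCN construction described in the abstract generalizes this slow term to polynomial systems via multiplicative synapses.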

The entorhinal cognitive map is attracted to goals

Aim: Study the encoding of reward locations by hippocampal and entorhinal cortical neurons.

Charlotte N Boccara*, Michele Nardin*, Federico Stella, Joseph O’Neill, Jozsef Csicsvari (*equal contributions)

Abstract:

Grid cells with their rigid hexagonal firing fields are thought to provide an invariant metric to the hippocampal cognitive map, yet environmental geometrical features have recently been shown to distort the grid structure. Given that the hippocampal role goes beyond space, we tested the influence of nonspatial information on the grid organization. We trained rats to daily learn three new reward locations on a cheeseboard maze while recording from the medial entorhinal cortex and the hippocampal CA1 region. Many grid fields moved toward the goal locations, leading to long-lasting deformations of the entorhinal map. Therefore, distortions in the grid structure contribute to goal representation during both learning and recall, which demonstrates that grid cells participate in mnemonic coding and do not merely provide a simple metric of space.

Paper: https://doi.org/10.1126/science.aav4837

Rats learned 3 new reward ("goal") locations every day, while Charlotte simultaneously recorded from CA1 and MEC neurons.

Statistical modeling of single-cell firing in MEC and CA1 allowed us to isolate the effect of learning on spatial representations.
Red circles = detected grid fields; blue triangles = reward locations.
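
As a purely hypothetical illustration of this kind of analysis (not the authors' pipeline, whose details are in the paper), the sketch below builds an occupancy-normalized firing rate map from tracked positions and spike positions, picks out candidate firing fields as smoothed local maxima, and measures their distance to a goal location. Arena size, bin size, smoothing width, tracking rate, and thresholds are all assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def rate_map(pos, spike_pos, arena=1.2, nbins=40, dt=1/39.0, sigma=1.5):
    """Occupancy-normalized rate map (Hz).
    pos: (T, 2) tracked positions in meters; spike_pos: (S, 2) positions at spike times."""
    edges = np.linspace(0, arena, nbins + 1)
    occ, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges])
    spk, _, _ = np.histogram2d(spike_pos[:, 0], spike_pos[:, 1], bins=[edges, edges])
    occ = gaussian_filter(occ * dt, sigma)        # seconds spent per bin, smoothed
    spk = gaussian_filter(spk, sigma)             # smoothed spike counts
    with np.errstate(invalid="ignore", divide="ignore"):
        rm = np.where(occ > 0.05, spk / occ, np.nan)  # mask poorly visited bins
    return rm, edges

def field_centers(rm, edges, peak_frac=0.5):
    """Candidate field centers: local maxima above peak_frac of the global peak."""
    rm_f = np.nan_to_num(rm)
    local_max = (rm_f == maximum_filter(rm_f, size=5)) & (rm_f > peak_frac * rm_f.max())
    centers_bins = np.argwhere(local_max)         # (x_bin, y_bin) indices
    bin_w = edges[1] - edges[0]
    return (centers_bins + 0.5) * bin_w           # bin indices -> meters

def distance_to_goal(centers, goal_xy):
    """Euclidean distance (m) from each field center to a goal location."""
    return np.linalg.norm(centers - np.asarray(goal_xy), axis=1)
```

Comparing such field-to-goal distances before and after learning is one simple way to ask whether fields moved toward the goals.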