JST: Japan Standard Time
JST 10:30 - 10:40
Takuya Isomura / Ken Nakae / Hideaki Shimazaki
JST 10:40 - 11:30
Toshitake Asabuki (RIKEN ECL Unit Leader)
Lunch Break (90 min)
JST 13:00 - 14:10
Brent Doiron (The University of Chicago)
Break (10 min)
JST 14:20 - 15:10
Akihiro Funamizu (Institute for Quantitative Biosciences, University of Tokyo)
JST 15:10 - 16:00
Jun-nosuke Teramae (Kyoto University)
Break (10 min)
JST 16:10 - 17:00
Hideaki Shimazaki (Kyoto University)
JST 17:00 - 17:50
Ken Nakae (ExCELLS, NINS)
JST 17:50 - 17:55
Takuya Isomura / Ken Nakae / Hideaki Shimazaki
Toshitake Asabuki
While spontaneous activity in the brain is often regarded as simple background noise, recent work has hypothesized that spontaneous activity instead reflects the brain's learned internal model, representing the statistical structure of the environment. Several computational studies have proposed synaptic plasticity rules that generate structured spontaneous activity, but the mechanism by which such statistical structure is learned and embedded in spontaneous activity remains unclear. Using a computational model, we investigate novel synaptic plasticity rules that learn structured spontaneous activity obeying appropriate probabilistic dynamics. The proposed plasticity rule for excitatory synapses minimizes the discrepancy between stimulus-evoked and internally predicted activity, while inhibitory plasticity maintains the excitatory-inhibitory balance. We show that this learning paradigm generates stimulus-specific cell assemblies that encode various activation statistics, including the activation rates of the assemblies and the transition statistics of the model's evoked dynamics. We also demonstrate that simulations of our model reproduce recent experimental results on the behavioral biases of monkeys making perceptual decisions and on the spontaneous activity of songbirds, suggesting that the proposed plasticity rule may underlie the mechanism by which animals learn internal models of the environment. Our results shed light on the learning mechanism of the brain's internal model, a crucial step towards a better understanding of the role of spontaneous activity as an internal generative model of stochastic processes in complex environments.
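To make the two plasticity rules concrete, the sketch below implements a toy rate-based version: recurrent excitatory weights are updated to reduce the discrepancy between stimulus-evoked and internally predicted activity, and inhibitory weights follow a Vogels-style rule that pushes predicted rates toward a target, standing in for excitatory-inhibitory balance. The network size, rate function, learning rates, and random stimulus patterns are all illustrative assumptions, not the speaker's actual model.

# Toy rate-based sketch of the two plasticity rules (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
n = 40                                              # population size
W_ff = rng.normal(0.0, 1.0, (n, n)) / np.sqrt(n)    # fixed feedforward weights
W_rec = np.zeros((n, n))                            # plastic recurrent excitatory weights
W_inh = np.zeros((n, n))                            # plastic inhibitory weights
eta_e, eta_i, rho = 1e-2, 1e-2, 0.2                 # learning rates, target rate

def rate(u):
    return 1.0 / (1.0 + np.exp(-u))                 # sigmoidal rate function

stimuli = (rng.random((5, n)) < 0.3).astype(float)  # five binary stimulus patterns

for step in range(20000):
    x = stimuli[rng.integers(5)]
    evoked = rate(4.0 * (W_ff @ x))                 # stimulus-evoked activity
    predicted = rate((W_rec - W_inh) @ evoked)      # internally predicted activity
    err = evoked - predicted                        # prediction error
    W_rec += eta_e * np.outer(err, evoked)          # excitatory rule: shrink the discrepancy
    W_inh += eta_i * np.outer(predicted - rho, evoked)  # inhibitory rule: push rates toward the target
    W_rec = np.clip(W_rec, 0.0, None)               # keep excitatory and inhibitory weights nonnegative
    W_inh = np.clip(W_inh, 0.0, None)

print("mean |evoked - predicted| after learning:", float(np.abs(err).mean()))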
Brent Doiron
Neuronal assemblies are strongly interconnected groups of neurons that coordinate their activity to perform specific functions, and as such are believed to be fundamental units of neural computation. The formation and stability of these assemblies are influenced by synaptic plasticity rules, particularly the fine-timescale learning induced by spike-timing-dependent plasticity (STDP). In this presentation we will investigate the role of STDP temporal asymmetry (or causality) in the formation and stability of neuronal assemblies in networks of spiking neuron models. This broad topic is studied in two separate, but related, projects. First, we will consider how the degree of causality in the STDP rule affects how assembly networks can tolerate overlap in assembly membership (neurons that are part of multiple assemblies). We show that networks with causal STDP can allow significant assembly overlap, while acausal STDP can cause two distinct assemblies to fuse into one, erasing any learned memories. Second, we consider how STDP of inhibitory neurons contributes to the formation of excitatory assemblies. We present experimental data indicating that somatostatin (SOM) neurons have a causal STDP rule, while parvalbumin (PV) neurons show an acausal STDP rule. In our model framework we show that plastic SOM inhibition is ideally suited to form lateral inhibition between assemblies, greatly reducing the energy required to form excitatory assemblies, while PV neurons perform homeostatic regulation of overall network activity. In sum, our work provides a step towards understanding how the temporal causality of STDP rules shapes the formation and stability of neuronal assemblies, offering insights into the principles underlying neural connectivity and memory representation.
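As a concrete illustration of the causal/acausal distinction drawn here, the snippet below evaluates a temporally asymmetric (causal) STDP window next to a temporally symmetric (acausal) one. The amplitudes and time constants are generic textbook-style assumptions, not the measured SOM or PV plasticity curves.

# Generic STDP windows (illustrative parameters, not the measured SOM/PV data).
import numpy as np

def causal_stdp(dt, a_plus=1.0, a_minus=0.5, tau=20.0):
    """Temporally asymmetric (causal) window; dt = t_post - t_pre in ms.
    Potentiation when the presynaptic spike precedes the postsynaptic spike,
    depression otherwise."""
    return np.where(dt > 0, a_plus * np.exp(-dt / tau), -a_minus * np.exp(dt / tau))

def acausal_stdp(dt, a=1.0, tau=20.0):
    """Temporally symmetric (acausal) window; depends only on |dt|."""
    return a * np.exp(-np.abs(dt) / tau)

dts = np.array([-80.0, -40.0, -10.0, 10.0, 40.0, 80.0])
print("dt (ms) :", dts)
print("causal  :", np.round(causal_stdp(dts), 3))
print("acausal :", np.round(acausal_stdp(dts), 3))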
Akihiro Funamizu
Our lab is interested in how animals integrate sensory inputs with predictions of sensory inputs or outcomes to optimize behavior. Signal detection theory based on Bayesian inference provides the optimal way to integrate sensory evidence and prediction. Based on this theory, we recently extended a tone-frequency discrimination task for head-fixed mice used in previous studies by introducing long and short sounds and by biasing the reward amount for each choice option. The tone durations and reward amounts affected the stimulus sensitivity and choice bias of mice, respectively. During the task, we performed brain-wide electrophysiology with Neuropixels 1.0 probes in the medial prefrontal cortex (mPFC), the secondary motor cortex (M2), and the auditory cortex (AC). We found that choices and sounds were mainly represented locally in the M2 and AC, respectively, while the expected reward of each option (i.e., prior value) was represented globally across all three recorded areas. These results suggest local and global representations of Bayesian inference in the cerebral cortex.
In the latter half of my talk, I will present our recent results on modeling mouse choice behavior with recurrent neural networks (RNNs).
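As a minimal illustration of the Bayesian integration described above, the sketch below simulates an ideal observer for a two-category discrimination: longer sounds provide more evidence samples (raising sensitivity), and an asymmetric reward shifts the decision criterion (producing a choice bias). The Gaussian observation model and every parameter are assumptions for illustration, not the statistics of the actual task.

# Toy Bayesian observer: sensitivity grows with tone duration, bias with reward asymmetry.
import numpy as np

rng = np.random.default_rng(1)

def p_choose_high(mu=0.5, n_samples=3, reward_ratio=1.0, n_trials=20000):
    """Fraction of 'high' choices on 'high' trials.

    The stimulus is 'high' (+mu) or 'low' (-mu); the observer sees n_samples
    noisy observations (standing in for tone duration), computes the log
    likelihood ratio, and adds log(reward_ratio) to account for asymmetric rewards.
    """
    obs = mu + rng.normal(0.0, 1.0, (n_trials, n_samples))   # 'high' trials only
    llr = 2.0 * mu * obs.sum(axis=1)                          # LLR for unit-variance Gaussians
    return np.mean(llr + np.log(reward_ratio) > 0.0)

# Longer sounds (more samples) raise the hit rate on 'high' trials;
# a larger reward for 'high' biases choices toward 'high'.
for n in (1, 3, 10):
    print(f"n_samples={n:2d}  P(high)={p_choose_high(n_samples=n):.3f}")
for r in (0.5, 1.0, 2.0):
    print(f"reward_ratio={r:.1f}  P(high)={p_choose_high(reward_ratio=r):.3f}")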
Jun-nosuke Teramae
Stimulus responses of cortical neurons exhibit a critical power law: the eigenspectrum of their covariance follows a power law with an exponent just at the border of differentiability of the neural manifold. This criticality is conjectured to balance the expressivity and robustness of neural codes, because a non-differentiable fractal manifold spoils coding reliability. However, contrary to this conjecture, we prove here that neural coding is not degraded even on the non-differentiable fractal manifold, where coding is extremely sensitive to perturbations. Rather, we show that the trade-off between energetic cost and information always makes the critical power law the optimal neural code. By revealing this non-trivial nature of high-dimensional coding, the theory developed here contributes to a deeper understanding of criticality and power laws in biological and artificial neural networks.
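For readers unfamiliar with the quantity at stake, the snippet below shows how a covariance eigenspectrum power law is typically quantified: synthetic responses are built so that the covariance eigenvalues decay as n^(-alpha), and the exponent is then recovered from a log-log fit over intermediate ranks. The synthetic data and the chosen exponent are assumptions for illustration; this is not the proof or the analysis presented in the talk.

# Quantifying a covariance eigenspectrum power law on synthetic responses.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_stimuli, alpha_true = 200, 2000, 1.0

ranks = np.arange(1, n_neurons + 1)
latent = rng.normal(size=(n_stimuli, n_neurons)) * ranks ** (-alpha_true / 2.0)
Q, _ = np.linalg.qr(rng.normal(size=(n_neurons, n_neurons)))  # random rotation
responses = latent @ Q.T                                      # synthetic "neural responses"

cov = np.cov(responses, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]

lo, hi = 10, 100                                              # fit over intermediate ranks
slope, _ = np.polyfit(np.log(ranks[lo:hi]), np.log(eigvals[lo:hi]), 1)
print("estimated power-law exponent:", -slope)                # close to alpha_true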
Hideaki Shimazaki
Higher-order interactions (HOIs) are ubiquitously observed in neural systems. In this talk, we demonstrate that the sparse population activity of neurons accounts for a significant portion of these HOIs. Our analysis, using noisy neuron models under balanced input conditions, reveals that local excitatory connections, rather than common inhibition, lead to the observed sparse population activity. We then introduce a model of HOIs in population activity that can produce a sparse, widespread spike-count histogram in large networks. This recurrent network, which realizes the distribution, exhibits signatures of threshold nonlinearity with supralinear activation, similar to modern Hopfield networks. Finally, we present a framework that unifies concepts such as divergence/entropy, population activity, and the nonlinearity of individual neurons. Specifically, we introduce recurrent networks under the maximum Rényi entropy principle and show that HOIs in this network induce explosive phase transitions, facilitate memory retrieval, and enhance memory capacity. This framework enables a theoretical investigation of networks with HOIs.
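To illustrate what a higher-order interaction means in this context, the sketch below generates sparse binary activity from a thresholded common-input Gaussian and reads the third-order interaction of a neuron triplet directly off the empirical pattern probabilities via the standard log-linear expansion. The shared-input strength and threshold are illustrative assumptions, not the models presented in the talk.

# Estimating a triplet (third-order) interaction from sparse binary activity.
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n_samples = 1_000_000
rho, thresh = 0.3, 1.5                       # shared-input strength, spiking threshold (assumed)

common = rng.normal(size=n_samples)
private = rng.normal(size=(n_samples, 3))
x = (np.sqrt(rho) * common[:, None] + np.sqrt(1 - rho) * private > thresh).astype(int)

# Empirical probability of each of the 8 spike patterns (with a small pseudocount).
p = {}
for pat in product((0, 1), repeat=3):
    p[pat] = (np.all(x == pat, axis=1).sum() + 1) / (n_samples + 8)

# Third-order interaction parameter of the log-linear model.
theta_123 = (np.log(p[1, 1, 1]) - np.log(p[1, 1, 0]) - np.log(p[1, 0, 1])
             - np.log(p[0, 1, 1]) + np.log(p[1, 0, 0]) + np.log(p[0, 1, 0])
             + np.log(p[0, 0, 1]) - np.log(p[0, 0, 0]))
print("firing probability per neuron:", round(float(x.mean()), 3))
print("estimated third-order interaction theta_123:", round(float(theta_123), 3))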
Ken Nakae
The dynamic activity of the brain is influenced by both global structural networks and local excitation-inhibition (E-I) balance. However, the precise relationship between these factors and brain dynamics remains unclear, particularly in non-human primates. To address this, we employed connectome-based modeling to estimate E-I balance from diffusion MRI (dMRI) networks and functional MRI (fMRI) brain activity in marmosets during awake and anesthetized states. Our results demonstrate that brain activity can be simulated from structural connectivity, with higher similarity to empirical data in the awake state than in the anesthetized state. Moreover, we found that the similarity between the empirical and simulated data correlated with the Lyapunov exponent, a measure of dynamical system complexity. Interestingly, during the awake state, the default mode network (DMN) exhibited a stronger correlation with overall cortical activity than other subnetworks did. These findings suggest that in awake marmosets there is a well-balanced relationship between global structural coupling and local E-I balance, particularly in the DMN, which contributes to the complexity of brain dynamics. Our study highlights the importance of considering both global and local factors in understanding brain function and provides a framework for future investigations of brain dynamics in non-human primates and their potential translation to human studies.
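As a rough sketch of a connectome-based modeling pipeline of this kind, the code below simulates a small rate network on a random "structural connectivity" matrix with a global coupling parameter and a local inhibitory (E-I) term, compares the simulated functional connectivity against a surrogate "empirical" matrix, and estimates the largest Lyapunov exponent from the divergence of nearby trajectories. Everything here (connectivity, parameters, surrogate data) is an assumption for illustration, not the marmoset dMRI/fMRI analysis.

# Toy connectome-based simulation: FC similarity and a Lyapunov-exponent estimate.
import numpy as np

rng = np.random.default_rng(4)
n, dt, steps = 30, 0.05, 6000
SC = rng.random((n, n)) * (rng.random((n, n)) < 0.3)    # sparse "structural connectivity"
np.fill_diagonal(SC, 0.0)
SC /= SC.sum(axis=1, keepdims=True) + 1e-12             # row-normalize
G, ei_balance = 1.6, 0.8                                # global coupling, local inhibition (assumed)

def step(x, noise):
    drive = G * SC @ np.tanh(x) - ei_balance * np.tanh(x)   # long-range excitation minus local inhibition
    return x + dt * (-x + drive) + np.sqrt(dt) * 0.02 * noise

# Simulate and build simulated functional connectivity (correlation matrix).
x = rng.normal(0.0, 0.1, n)
traj = np.empty((steps, n))
for t in range(steps):
    x = step(x, rng.normal(size=n))
    traj[t] = x
FC_sim = np.corrcoef(traj[1000:].T)

# Surrogate "empirical" FC (in practice this would come from fMRI time series).
FC_emp = np.corrcoef(rng.normal(size=(500, n)).T)
iu = np.triu_indices(n, k=1)
similarity = np.corrcoef(FC_sim[iu], FC_emp[iu])[0, 1]
print("FC similarity (Pearson r):", round(float(similarity), 3))

# Largest Lyapunov exponent from the divergence of two nearby noise-free trajectories.
x1 = traj[-1].copy()
x2 = x1 + 1e-8 * rng.normal(size=n)
lam, d0 = 0.0, np.linalg.norm(x2 - x1)
for t in range(2000):
    x1 = step(x1, np.zeros(n))
    x2 = step(x2, np.zeros(n))
    d = np.linalg.norm(x2 - x1)
    lam += np.log(d / d0)
    x2 = x1 + (x2 - x1) * d0 / d        # renormalize the separation
print("largest Lyapunov exponent (per unit time):", round(lam / (2000 * dt), 3))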