A fundamental challenge for brain modelers is exploring parameters that are not well constrained by the limitations of experimental techniques. A brute-force exploration can be prohibitively expensive computationally due to the curse of dimensionality (CoD). Yet extracting biological insights from this complex, high-dimensional parameter landscape is crucial for experimentalists. As a preliminary attempt to address this question, we used a similar CG method to study the "background" firing rates of V1 (i.e., when it is not processing any visual information). We identified a thickened codimension-1 "viable" parameter manifold, on which the model consistently reproduces the spontaneous cortical firing rates measured in experiments.
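The idea of a thickened viable manifold can be illustrated with a minimal rejection-sampling sketch. Everything here is hypothetical: `cg_background_rates` is a toy stand-in for the actual CG simulation, and the firing-rate bands are illustrative values, not our measured constraints. The point is only that sampling a parameter box and keeping points whose background rates fall inside experimental bands carves out a thin slab of viable parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the CG model: maps a 5-D parameter vector to
# mean background firing rates (excitatory, inhibitory) in Hz. The real
# model would run the coarse-grained dynamics to a steady state instead.
def cg_background_rates(theta):
    w_ee, w_ei, w_ie, w_ii, drive = theta
    r_e = max(drive + 0.5 * w_ee - 0.4 * w_ei, 0.0)
    r_i = max(0.3 * drive + 0.5 * w_ie - 0.2 * w_ii, 0.0)
    return r_e, r_i

# Illustrative bands for spontaneous rates (not actual experimental values).
E_BAND = (2.0, 5.0)   # Hz, excitatory cells
I_BAND = (5.0, 12.0)  # Hz, inhibitory cells

def is_viable(theta):
    r_e, r_i = cg_background_rates(theta)
    return E_BAND[0] <= r_e <= E_BAND[1] and I_BAND[0] <= r_i <= I_BAND[1]

# Rejection sampling over a parameter box; the surviving points trace out
# the thickened "viable" manifold inside the full parameter space.
samples = rng.uniform(0.0, 10.0, size=(20000, 5))
viable = np.array([th for th in samples if is_viable(th)])
print(f"viable fraction: {len(viable) / len(samples):.3f}")
```

In practice the CoD makes uniform sampling hopeless in high dimensions; the sketch only conveys what "thickened codimension-1" means geometrically, namely that two constraint bands cut a thin slab out of the box.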
How does the cortical architecture give rise to cognitive functions? Much effort has been devoted to how different parts of the brain connect at large scales (see, e.g., the Human Connectome Project). However, because of the difficulty of experimental measurement, much less is known about the detailed, local connectivity within a single cortical area, such as the projections between different layers. How, then, does a single part of cortex handle multiple cognitive functions at the same time? By examining how different "blocks" of neurons are coupled, our modeling approach offers a way to explore these detailed connections and their role in cognitive functions.
With the advantages of our CG model in exploring parameters and detailed cortical architecture, our goal for the next 3-5 years is to build a comprehensive, multi-layer model of primate V1 that captures all of its major visual tasks and the dynamic behavior of its various layers. This ambitious project will require a deep dive into the interaction kernels; see Question 3 below.
Generally, how can one feasibly design an appropriate interaction kernel for different cortical areas? This question requires a solid grasp of
How the activities of local circuits contribute to the coarser-level dynamics of the whole modeled area; and
How to efficiently predict these coarser-level statistics from the parameters, architecture, and inputs of local circuits. We could tackle this with either
a. A novel mathematical theory of spiking network dynamics. The theory must hold for finite numbers of neurons and capture transient synchrony across neurons (the latter is necessary for neural oscillations; see more in Topic 2), or
b. An efficient way of simulating local circuits repetitively.
The current setup is based on method b: we precompute a library of local circuit responses. However, for a multi-layer model the computational cost will grow exponentially, because the library must cover many more types of inputs from different layers and many more parameters. An alternative to method b is to use modern machine-learning methods, such as deep networks, to "learn" the interaction kernels from a small number of local circuit simulations.
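A surrogate-learning version of method b can be sketched as follows. The simulator, feature map, and regularization here are all illustrative assumptions: `simulate_local_circuit` is a toy nonlinear response standing in for a costly spiking simulation, and the surrogate is plain ridge regression on quadratic features rather than a deep network, just to show the fit-once, query-cheaply workflow.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for an expensive local-circuit spiking simulation:
# maps (parameter, layer-input) features to a mean firing-rate response.
def simulate_local_circuit(x):
    return np.tanh(x @ np.array([0.8, -0.5, 0.3, 0.6])) + 0.01 * rng.standard_normal()

# A small budget of expensive simulations serves as training data.
X_train = rng.uniform(-1, 1, size=(200, 4))
y_train = np.array([simulate_local_circuit(x) for x in X_train])

# Surrogate: ridge regression on quadratic features, fit with plain numpy.
def features(X):
    quad = np.einsum('ni,nj->nij', X, X).reshape(len(X), -1)
    return np.hstack([np.ones((len(X), 1)), X, quad])

F = features(X_train)
lam = 1e-3
w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y_train)

# The cheap surrogate then replaces repeated simulation inside the CG model.
X_test = rng.uniform(-1, 1, size=(500, 4))
y_test = np.array([simulate_local_circuit(x) for x in X_test])
pred = features(X_test) @ w
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"surrogate RMSE on held-out inputs: {rmse:.3f}")
```

The design point is the cost trade-off: 200 simulator calls buy a surrogate that can be evaluated millions of times inside the CG iteration, which is exactly where a precomputed library becomes exponentially large as layers and input types multiply.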
Analyzing the iteration map Φ that governs the dynamics of the CG model. Φ is determined by the interaction kernel and the couplings between different blocks, and maps one dynamic state of the cortical area to the next. The following mathematical properties of Φ are of special interest:
the invariant set of Φ, representing the steady-state activities of the cortical area under a stationary stimulus,
the spectrum of Φ, indicating how orbits in the state space converge toward the invariant set. This convergence corresponds to the transient dynamics traveling between different invariant sets as the sensory stimulus switches from one to another.
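Both properties can be illustrated on a toy map. The coupling matrix, bias, and gain function below are illustrative choices, not the actual CG kernel: the sketch iterates Φ to a fixed point (the simplest invariant set) and then estimates the spectrum of its linearization by finite differences, so that eigenvalues inside the unit circle certify local convergence.

```python
import numpy as np

# Toy coarse-grained map Phi on the firing rates of three "blocks".
# W (couplings) and the tanh gain are illustrative, not the actual CG model.
W = np.array([[0.30, -0.20, 0.10],
              [0.15,  0.25, -0.10],
              [0.05,  0.10,  0.20]])
b = np.array([1.0, 0.5, 0.8])

def Phi(r):
    return np.tanh(W @ r + b)

# Iterate Phi to its invariant set (here a fixed point) under a
# stationary stimulus encoded in the bias b.
r = np.zeros(3)
for _ in range(200):
    r = Phi(r)

# Spectrum of the linearization at the fixed point, via central finite
# differences; a spectral radius below 1 implies local convergence.
eps = 1e-6
J = np.column_stack([(Phi(r + eps * e) - Phi(r - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
eigs = np.linalg.eigvals(J)
print("fixed point:", np.round(r, 4))
print("spectral radius:", np.max(np.abs(eigs)))
```

Switching the stimulus amounts to changing b, which moves the fixed point; the spectral radius then bounds how fast the transient between the old and new invariant sets decays.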
The convergence of Φ would be natural if there were a sharp timescale separation between the finer-level dynamics (within local circuits) and the coarser-level dynamics (across different "blocks"), as in most physical systems. However, there is no sharp timescale separation in the cortex: spikes from a neuron are projected to nearby neurons and to longer-range targets on the same timescale. This is a fundamental challenge for our analysis.