Modeling the human cortex is extremely challenging due to its structural and dynamic complexity. Biologically detailed models can incorporate many features of real cortical circuits but are computationally costly and hard to scale up, constraining them to small patches of cortex and limiting the range of phenomena that can be studied. Reduced and phenomenological models, by contrast, are much easier to build and run, but there is a trade-off: the more a model is simplified, the harder it is to compare directly with experimental data. To strike a balance between biological realism and computational efficiency, we aim for mathematically reduced models that retain many essential features of the real cortex at a small fraction of the cost.
In collaboration with Lai-Sang Young and Kevin K. Lin, we propose a multiscale modeling framework for cortex that combines coarse-graining (CG) with precomputed local responses. CG means that, instead of simulating and analyzing every neuron in a large network, one "glues" multiple neurons into one block and then studies how each block responds to inputs from external sources and from the rest of the system (see the comment on historical work on CG below). A typical multiscale approach alternates between one "fine" and one "coarse" scale level:
1. Subdivide the cortical region of interest into blocks of local circuits, then coarse-grain each block into one point in a "neural space". Each block consists of O(100) nearby neurons (matching the scale of cortical minicolumns) receiving similar inputs from surrounding and external sources.
2. Compute the responses of each block to its surrounding blocks and external signals ("fine" level).
3. Integrate the outputs from the previous step to update the input signals for each block ("coarse" level).
The iteration between steps 2 and 3 simulates the dynamics of the reduced CG model. It defines a mapping Φ sending one state x of the modeled cortical area (where x collects the local responses of every block) to the next.
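The alternation between the coarse and fine levels can be sketched in a few lines. This is a toy illustration only: `local_response` below is a hypothetical stand-in (a saturating nonlinearity) for a block's actual input-output relation, and the coupling matrix is random, not derived from cortical anatomy.

```python
import numpy as np

def local_response(external_input, recurrent_input):
    """Toy stand-in for a block's input-output relation (not the real kernel)."""
    return np.tanh(external_input + recurrent_input)

def phi(state, coupling, external):
    """One iteration of the CG map Phi.

    state:    current response of each block, shape (n_blocks,)
    coupling: block-to-block connectivity, shape (n_blocks, n_blocks)
    external: external drive to each block, shape (n_blocks,)
    """
    recurrent = coupling @ state                  # coarse level: gather inputs per block
    return local_response(external, recurrent)   # fine level: each block responds

rng = np.random.default_rng(0)
n = 16                                           # number of coarse-grained blocks
state = rng.random(n)
coupling = 0.1 * rng.random((n, n))              # weak random coupling (illustrative)
external = rng.random(n)

for _ in range(50):                              # iterate Phi toward a steady state
    state = phi(state, coupling, external)
```

In the actual framework, the call to `local_response` is where the interaction kernel (or a library lookup, see below) enters; everything else is bookkeeping over blocks.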
The burden of preserving biological realism then falls on the interaction kernel of the local circuits, i.e., the input-output relation coupling the fine and coarse scales. The better the interaction kernel imitates the local dynamics of the real brain, the more faithful the CG model. Its derivation requires careful consideration of the physical laws governing both scales and accurate translation of information across scales. Unfortunately, a comprehensive mathematical theory that systematically relates local circuits' architectures, parameters, and stimuli to their dynamic outcomes (e.g., firing rates, temporal synchrony) is still lacking. A more detailed discussion can be found in the second topic.
As an initial step toward validating our framework, we propose precomputing a library of all possible local responses, allowing us to replace the interaction kernel with direct lookups and interpolations from this library. This approach is grounded in the observation that, despite the brain's overall structural heterogeneity, the architecture of local networks within a given cortical layer and region tends to be remarkably consistent. Consequently, each block can be viewed as a single dynamical system driven by varying inputs. In the library, local responses are computed as "local thermal equilibria", i.e., the stationary distributions of local circuits driven by statistically stationary stimuli.
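The precompute-then-interpolate idea can be illustrated with a minimal sketch. Here `simulate_local_circuit` is a hypothetical placeholder (a smooth firing-rate curve) for an expensive local-circuit simulation run to statistical stationarity; in practice each grid point of the library would come from such a simulation, and the input space would have more than one dimension.

```python
import numpy as np

def simulate_local_circuit(drive):
    """Stand-in for an expensive simulation of one local circuit to equilibrium."""
    return np.log1p(np.exp(drive - 1.0))   # smooth, softplus-like firing-rate curve

# Offline: precompute the library once over a grid of input drives.
drive_grid = np.linspace(0.0, 10.0, 101)
response_library = simulate_local_circuit(drive_grid)

def lookup_response(drive):
    """Online: interpolate the library instead of re-simulating the circuit."""
    return np.interp(drive, drive_grid, response_library)

# The CG model can then query arbitrary drives at negligible cost.
queries = np.array([0.3, 2.7, 9.9])
approx = lookup_response(queries)
exact = simulate_local_circuit(queries)
```

The offline cost is paid once per local-circuit architecture, which is why the library can be reused extensively across blocks and across simulations.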
Our recent paper tests the proposed approach on a biologically detailed model of primate primary visual cortex (V1). Our CG model can replicate the activities observed in the detailed model with significantly reduced computational resources (achieving a speedup of ~1600 times). Also, the CG model successfully captures critical features of V1 such as orientation selectivity. The computational advantages of our approach become even more pronounced when applied to larger cortical areas, given that the computation costs of biologically detailed models usually escalate superlinearly with network size N, whereas the precomputed library can be reused extensively in the CG model.
In summary, our framework offers a practical platform for systematically evaluating cortical models with different parameters and network architectures under various external stimuli, facilitating a more efficient exploration of cortical dynamics and functions.
The idea of coarse-graining is not new in neuroscience (there is an extensive literature starting from Wilson & Cowan, including work on neural field models, Fokker-Planck descriptions, kinetic theories, refractory density methods, and so on). With some exceptions, however, most previous work has focused on analyzing "universal" or generic phenomena. For computational and analytical convenience, such models usually assume simple analytical forms for the interaction kernels. In contrast, we aim for cortical models that are simple enough to be understood yet realistic enough to shed light on specific biological mechanisms: models that can be queried and that can offer guidance for future experiments.