9:00 - 9:30
Simplicial Convolutional Recurrent Neural Networks for spike train decoding
The human brain uses a variety of neuronal networks to navigate through space. Head direction cells fire in groups tuned to the animal's heading, whereas grid cells are organized in stacked modules whose combined firing patterns orient a person within their surroundings. We want to exploit the structure of these neuron clusters to decode head direction and grid cell data. Graphs have long been used to represent neurons and their connectivity, and they capture pairwise relationships well, but they cannot capture higher-order connectivity: if three neurons fire at once, ordinary graph edges provide no way to record this information. Simplicial complexes, a tool from algebraic topology, offer the structure needed to record these higher-order relationships: any k neurons firing together can be captured by a simplex on those k neurons (a (k-1)-simplex). Using simplicial complexes as inputs, we develop the simplicial convolutional recurrent neural network (SCRNN), a method for decoding head direction and grid cell data. The SCRNN takes as input a simplicial complex approximating the manifold underlying the neuronal data; simplicial convolutional layers produce a vector summarizing the topology of the data, which is then fed to a recurrent neural network. We evaluate our method on head direction and grid cell data and compare its performance against three other networks. We show that the SCRNN decodes neural data more accurately than traditional neural networks, and we theorize that it can be adapted to other computational neuroscience tasks.
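A minimal sketch of an SCRNN-style architecture, assuming PyTorch and a precomputed Hodge Laplacian for the input complex; the class names, the polynomial filter form, and the mean-pool topology summary are illustrative choices, not the authors' implementation.

import torch
import torch.nn as nn

class SimplicialConv(nn.Module):
    """One simplicial convolution: a learned polynomial filter in the Hodge Laplacian."""
    def __init__(self, laplacian, in_feats, out_feats, order=2):
        super().__init__()
        self.L = laplacian                       # (n_simplices, n_simplices), fixed
        self.weights = nn.Parameter(0.1 * torch.randn(order + 1, in_feats, out_feats))

    def forward(self, x):                        # x: (batch, n_simplices, in_feats)
        out, Lx = x @ self.weights[0], x
        for p in range(1, self.weights.shape[0]):
            Lx = self.L @ Lx                     # propagate signals along the complex
            out = out + Lx @ self.weights[p]
        return torch.relu(out)

class SCRNN(nn.Module):
    """Simplicial convolution summarizes topology per time step; a GRU decodes over time."""
    def __init__(self, laplacian, in_feats, hidden, n_outputs):
        super().__init__()
        self.conv = SimplicialConv(laplacian, in_feats, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_outputs)

    def forward(self, signals):                  # (batch, time, n_simplices, in_feats)
        b, t, n, f = signals.shape
        z = self.conv(signals.reshape(b * t, n, f)).mean(dim=1)  # topology summary vector
        h, _ = self.rnn(z.reshape(b, t, -1))
        return self.readout(h[:, -1])            # e.g. head direction as (cos, sin)

Head direction could then be decoded by training against unit vectors (cos θ, sin θ) with a mean-squared error loss.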
9:30 - 10:00
Sleep in Biological and Artificial Networks
Artificial neural networks are known to exhibit a phenomenon called catastrophic forgetting, where their performance on previously learned tasks deteriorates when learning new tasks sequentially. In contrast, human and animal brains possess the remarkable ability of continual learning, enabling them to incorporate new information while preserving past memories. Empirical evidence indicates that sleep plays a crucial role in consolidating recent memories and in safeguarding previously acquired knowledge against catastrophic forgetting. I will begin by elucidating the key features and mechanisms of sleep in the brain. Subsequently, I will present our recent findings on the application of sleep-related concepts in computational models of brain networks and artificial intelligence.
10:00 - 10:30
Computation, Coherence, Coding, Chemistry, Communication in the Brain
The hypothesis of coexisting causes should come as no surprise in the complex multiscale dynamical systems of the brain. There are several sources of activity patterning: electrophysiological, dendritic, chemophysiological in spines, network-level, single-cell, and more. These are complementary, yielding layered multiplexing. Famously, the individual atoms of a gas can be encapsulated as particles for the purposes of understanding the gas laws and thermodynamics: there is no need to consider electron shells or intranuclear forces. By contrast, a particular ion channel type cannot be fully encapsulated as a phenomenological inductor for the purpose of understanding neuron dynamics.
Break 10:30 - 11:00
11:00 - 11:30
Recent advances in neuroimaging technology have significantly contributed to a better understanding of human central nervous system (CNS) organization and to the development and application of more efficient clinical programs. However, the limitations and tradeoffs inherent to existing techniques prevent them from providing large-scale imaging of neural activity with high spatiotemporal resolution, deep penetration, and specificity in awake and behaving participants. Recently, functional ultrasound imaging (fUSI) was introduced as a revolutionary technology that provides a unique combination of spatial coverage, unprecedented spatiotemporal resolution (~100 μm, up to ~10 ms), and compatibility with freely moving subjects, enabling a range of new pre-clinical and clinical applications. Based on power Doppler imaging, fUSI measures changes in cerebral blood volume (CBV) by detecting backscattered echoes from red blood cells moving within its field of view. While fUSI is a hemodynamic technique, its superior spatiotemporal performance and sensitivity (~1 mm/s blood flow velocity) offer a substantially closer connection to the underlying neuronal signal than is achievable with other hemodynamic methods such as fMRI. It is minimally invasive: because the skull attenuates the acoustic wave, a trepanation is required in large organisms to let the ultrasound waves penetrate, but fUSI does not enter the brain itself; the ultrasound probe sits outside the dura mater.
By combining fUSI technology and machine learning techniques, we demonstrated for the first time that fUSI can predict the motor intention of non-human primates before they perform an actual movement, a prerequisite for brain-machine interfaces (BMIs). Recently, we extended our studies to decoding micturition (i.e., the action of urination) from the spinal cord in human patients and animal models, opening a new avenue for developing spinal cord machine interface (SCMI) technologies to restore bladder control in patients with urinary incontinence (UI). We are also working on utilizing fUSI technology to study the pathophysiology of neurological (e.g., chronic pain) and psychiatric (e.g., schizophrenia) diseases in pre-clinical and clinical studies, and to guide therapeutic neuromodulation treatments, a capability that does not currently exist. Overall, fUSI is still in its infancy. Although our work establishes fUSI as a promising platform for neuroscientific investigation with potential for profound clinical impact, many challenges remain, such as how to handle, process, and decode large ultrasonic datasets in real time with efficient and interpretable machine learning techniques.
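A minimal sketch of the kind of fUSI decoding pipeline described above, assuming motion-corrected power Doppler frames aligned to trials; the file names and the PCA-plus-LDA choice are illustrative, not the study's actual decoder.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical inputs: Doppler frames from the delay period before movement
# (n_trials, height, width) and the intended movement direction per trial.
frames = np.load("doppler_trials.npy")
labels = np.load("movement_labels.npy")

X = frames.reshape(len(frames), -1)             # flatten each image to a vector
decoder = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())

# Cross-validated accuracy of predicting motor intention before movement.
scores = cross_val_score(decoder, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")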
11:30 - 12:00
Applying simulation-based inference in the Human Neocortical Neurosolver neural modeling software to study the mechanisms of electrophysiological biomarkers
Magneto- and electroencephalography (MEG/EEG) provide electrophysiological biomarkers of many healthy and pathological processes. Among the most commonly studied signals are low-frequency oscillations, such as 15-29 Hz beta rhythms, and event-related potentials [1]. However, the multiscale cell and circuit mechanisms underlying these signals can be difficult to infer. The Human Neocortical Neurosolver (HNN; hnn.brown.edu) [2] is biophysically detailed neural modeling software for thalamocortical dynamics, designed to study the multiscale origin of these signals at the cell level (e.g., spiking) and the circuit level (e.g., local field potentials and current dipoles). Detailed models like HNN enable a more direct comparison between model parameters and measurable biological properties; however, their use is challenging due to computationally expensive simulations and the highly complex relationships between model parameters and simulation outputs. Further, when using models to infer the mechanisms of biomarkers in the form of electrophysiological time series waveforms, there is often a large space of parameter values capable of producing the same signal.
In this work we show how these challenges can be addressed with the technique of simulation-based inference (SBI) by demonstrating its use on the HNN model. SBI is a deep-learning-based Bayesian inference framework that permits the estimation of model parameters capable of producing observed neural signals [3]. SBI is often used to estimate parameters that can produce a defined summary statistic of the signal of interest. One particular challenge when applying SBI to infer time series waveforms is the choice of summary statistics: experimentally recorded waveforms often contain complex patterns of noise and unexplained activity that are not captured by biophysical models, requiring users to choose summary statistics that capture the important features of the waveforms. By applying SBI to previously studied neural biomarkers in HNN, namely transient beta rhythms (i.e., beta events) and event-related potentials, we explore how different summary statistics perform for inferring the parameters underlying biomarker generation. We also discuss methodological considerations for applying SBI in large-scale neural models.
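A minimal sketch of this SBI workflow using the sbi package; a toy damped oscillator stands in for an expensive HNN simulation, and peak amplitude and latency stand in for the waveform summary statistics, so all parameter names here are illustrative.

import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

t = torch.linspace(0.0, 1.0, 200)

def simulate(theta):
    """Toy stand-in for an HNN run: theta = (amplitude, decay, frequency)."""
    a, d, f = theta
    wave = a * torch.exp(-d * t) * torch.sin(2 * torch.pi * f * t)
    wave = wave + 0.05 * torch.randn_like(t)            # observation noise
    peak, argpeak = wave.abs().max(dim=0)
    return torch.stack([peak, t[argpeak]])              # summary statistics

prior = BoxUniform(low=torch.tensor([0.5, 1.0, 2.0]),
                   high=torch.tensor([2.0, 10.0, 12.0]))
theta = prior.sample((2000,))                           # draws from the prior
x = torch.stack([simulate(th) for th in theta])         # simulated summaries

inference = SNPE(prior=prior)
posterior = inference.build_posterior(
    inference.append_simulations(theta, x).train())

# Posterior over parameters consistent with one "observed" waveform summary.
x_obs = simulate(torch.tensor([1.2, 4.0, 6.0]))
samples = posterior.sample((1000,), x=x_obs)
print(samples.mean(dim=0))                              # posterior mean estimate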
The combination of HNN and SBI opens the door to using large-scale detailed models for questions typically suited to simplified/abstract models of neural activity that are more mathematically tractable. Importantly, this approach offers vital insight into how cell-level properties interact to produce scientifically and clinically relevant biomarkers.
References
[1] S. R. Jones et al., “Quantitative Analysis and Biophysically Realistic Neural Modeling of the MEG Mu Rhythm: Rhythmogenesis and Modulation of Sensory-Evoked Responses,” J. Neurophysiol., vol. 102, no. 6, pp. 3554–3572, Oct. 2009.
[2] S. A. Neymotin et al., “Human Neocortical Neurosolver (HNN), a new software tool for interpreting the cellular and network origin of human MEG/EEG data,” eLife, vol. 9, p. e51214, Jan. 2020.
[3] K. Cranmer et al., “The frontier of simulation-based inference,” Proc. Natl. Acad. Sci., vol. 117, no. 48, pp. 30055–30062, Dec. 2020.
Acknowledgements
This work was supported by the following grants: R01AG076227 and RF1MH130415 from the National Institutes of Health (NIH); ANR-20-CHIA-0016 from the Agence nationale de la recherche (ANR).
12:00 - 12:30
Claudia Clopath
De novo motor learning creates structure in neural activity space that shapes adaptation
Animals can quickly adapt learned movements in response to external perturbations. Motor adaptation is likely influenced by an animal’s existing movement repertoire, but the nature of this influence is unclear. Long-term learning causes lasting changes in neural connectivity which determine the activity patterns that can be produced. Here, we sought to understand how a neural population’s activity repertoire, acquired through long-term learning, affects short-term adaptation by modeling motor cortical neural population dynamics during de novo learning and subsequent adaptation using recurrent neural networks. We trained these networks on different motor repertoires comprising varying numbers of movements. Networks with multiple movements had more constrained and robust dynamics, which were associated with more defined neural ‘structure’—organization created by the neural population activity patterns corresponding to each movement. This structure facilitated adaptation, but only when small changes in motor output were required, and when the structure of the network inputs, the neural activity space, and the perturbation were congruent. These results highlight trade-offs in skill acquisition and demonstrate how prior experience and external cues during learning can shape the geometrical properties of neural population activity as well as subsequent adaptation.
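A minimal sketch of this modeling setup, assuming PyTorch; the network size, targets, and training loop are illustrative rather than the authors' exact protocol.

import torch
import torch.nn as nn

n_moves, hidden, T = 8, 128, 50                 # repertoire size, units, time steps

class MotorRNN(nn.Module):
    """Vanilla recurrent network producing 2D hand velocity from a movement cue."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(n_moves, hidden, nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(hidden, 2)

    def forward(self, cue):                     # cue: (batch, T, n_moves), one-hot
        h, _ = self.rnn(cue)
        return self.readout(h)                  # (batch, T, 2) velocities

# Targets: constant-velocity reaches to n_moves equally spaced directions.
angles = torch.arange(n_moves, dtype=torch.float32) * 2 * torch.pi / n_moves
targets = torch.stack([torch.cos(angles), torch.sin(angles)], -1)[:, None, :].expand(-1, T, -1)
cues = torch.eye(n_moves)[:, None, :].expand(-1, T, -1)

model = MotorRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):                        # de novo learning of the repertoire
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(cues), targets)
    loss.backward()
    opt.step()

# Adaptation can then be probed by retraining against perturbed (e.g. rotated)
# targets and asking how the pre-learned activity structure constrains it.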
Lunch 12:30 - 13:30
13:30 - 14:00
Directed structures in brain networks, Part I
A strong hypothesis in neuroscience is that many aspects of brain function are determined by the “map of the brain” and that its computational power relies on its connectivity architecture. Impressive scientific and engineering advances in recent years have generated a plethora of large brain networks with incredibly complex architectures. A crucial aspect of this architecture is its inherent directionality, reflecting the direction of information flow.
Two of the stark differences between directed and undirected networks are the presence of reciprocal connections and of cliques of neurons with different levels of directionality. It has been shown that both reciprocal connections and directed cliques (those that maximize directionality) are overrepresented motifs in neural networks, and that these are formed selectively rather than randomly. This brings forward questions in mathematics and in computational neuroscience. In the first talk, we explore how these motifs interact with each other and what their function is. In the second, we delve deeper into how to build appropriate null models that take directionality into account.
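A small sketch of the two motif counts in question, using networkx with a random directed graph as a stand-in connectome; here a "directed clique" is a fully connected, acyclic set of neurons.

import itertools
import networkx as nx

G = nx.gnp_random_graph(50, 0.1, directed=True, seed=1)   # stand-in connectome

def count_reciprocal(G):
    """Pairs of neurons connected in both directions."""
    return sum(1 for u, v in G.edges if u < v and G.has_edge(v, u))

def count_directed_3cliques(G):
    """Triangles u -> v, v -> w, u -> w: all edges present, fully feed-forward.
    Each such triangle is found once via its unique source -> middle -> sink order."""
    return sum(1 for u, v, w in itertools.permutations(G.nodes, 3)
               if G.has_edge(u, v) and G.has_edge(v, w) and G.has_edge(u, w))

print("reciprocal pairs:", count_reciprocal(G))
print("directed 3-cliques:", count_directed_3cliques(G))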
14:00 - 14:30
Directed structures in brain networks, Part II
A strong hypothesis in neuroscience is that many aspects of brain function are determined by the “map of the brain” and that its computational power relies on its connectivity architecture. Impressive scientific and engineering advances in recent years have generated a plethora of large brain networks with incredibly complex architectures. A crucial aspect of this architecture is its inherent directionality, reflecting the direction of information flow.
Two of the stark differences between directed and undirected networks are the presence of reciprocal connections and of cliques of neurons with different levels of directionality. It has been shown that both reciprocal connections and directed cliques (those that maximize directionality) are overrepresented motifs in neural networks, and that these are formed selectively rather than randomly. This brings forward questions in mathematics and in computational neuroscience. In the first talk, we explore how these motifs interact with each other and what their function is. In the second, we delve deeper into how to build appropriate null models that take directionality into account.
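A sketch of one such directionality-aware null model, reusing G and count_reciprocal from the sketch under Part I: rewire while preserving every node's in- and out-degree, then compare the observed motif count against the null distribution.

import numpy as np
import networkx as nx

def null_samples(G, n_samples=100, seed=0):
    """Degree-preserving directed null models (configuration model)."""
    din = [d for _, d in G.in_degree()]
    dout = [d for _, d in G.out_degree()]
    for i in range(n_samples):
        H = nx.directed_configuration_model(din, dout, seed=seed + i)
        H = nx.DiGraph(H)                                 # collapse parallel edges
        H.remove_edges_from(list(nx.selfloop_edges(H)))
        yield H

obs = count_reciprocal(G)
null = np.array([count_reciprocal(H) for H in null_samples(G)])
z = (obs - null.mean()) / null.std()                      # overrepresentation score
print(f"reciprocal pairs: observed {obs}, null {null.mean():.1f}, z = {z:.1f}")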
14:30 - 15:00
Characterizing the topology of neural ensemble activity
Neural ensembles are groups of neurons thought to perform a distinct computation in the brain. However, if we do not know the function of the neurons a priori, finding and understanding ensembles in mixed population recordings is difficult. Continuous attractor network models suggest that the collective responses of neural ensembles are constrained to manifolds, smoothly traversing this space in alignment with the variable encoded by the neurons (Skaggs, 1995). Hence, the represented covariate defines the topology of the neural activity, and, conversely, the topology of the neural activity indicates the encoded variable (Curto, 2017). We therefore wish to determine the latter, and there are two main schemes for doing so: analyzing either the correlation structure or the activity state space of the network. This amounts to studying either the rows (i.e., the spike trains or neural codes) or the columns (the population codes) of the activity matrix. By forming a simplex on the n neurons that are active in the same activity state(s), or on the k states that contain the same neuron(s), we obtain the correlation complex or the state space complex, respectively. Dowker duality states that the topologies of these two complexes are identical (Dowker, 1952). However, in these constructs a single random firing of a neuron may change the topology, so we assign each simplex a weight given by the number of overlapping states or neurons and threshold by a minimum number of overlaps. Applying persistent homology to the corresponding filtration of complexes assesses all thresholds at once and detects the dominant topological features. These features may nevertheless depend on which of the complexes is analyzed: while the correlation complex captures how often neurons were coactive and is robust to noisy activity, the state space complex registers which neurons were active and is thus more robust to noisy neurons. We study when these differ and propose a means of using their complementary strengths in analyzing neural data: finding neural ensembles and characterizing their function topologically. This approach is applied in idealized settings and in experimental recordings, revealing the topology of functional networks such as grid, head direction, place, and orientation-tuned cells.
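A minimal sketch of the weighted correlation-complex pipeline, assuming a binary activity matrix and the ripser package; Vietoris-Rips persistence on a co-activity dissimilarity is used here as a clique-complex approximation of the weighted complex, and all sizes are illustrative.

import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
A = (rng.random((60, 500)) < 0.1).astype(float)   # stand-in activity matrix (neurons x states)

co = A @ A.T                                      # co-activity counts per neuron pair
dissim = 1.0 / (1.0 + co)                         # more overlap -> earlier in the filtration
np.fill_diagonal(dissim, 0.0)

# Persistent homology across all thresholds at once; long bars are the
# dominant features (e.g. one persistent 1-cycle for a head direction ring).
dgms = ripser(dissim, distance_matrix=True, maxdim=1)["dgms"]
print("H1 bars found:", len(dgms[1]))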
Break 15:00 - 15:30
15:30 - 16:00
Topology of activity in macaque V1 is closely related to bottom-up and top-down signals
High-dimensional brain activity is often organized into lower-dimensional neural manifolds with intricate topologies, which can represent a plethora of behavioral variables. However, neural manifolds in the visual cortex of primates remain understudied [1, 2].
Here, we study the neural manifolds of macaques (Macaca mulatta, N=3) in V1 during active vision and resting state, recorded with extracellular multi-electrode (Utah) arrays totaling 1024 electrodes [5]. In the active vision task, a sweeping bar was presented that moved across the screen in different directions. Due to the retinotopy of V1, this evokes predictable population activity in V1 with a ring-like topology. A simple model of sweeping activity could reproduce this topology, showing that the geometry of V1 activity is highly predictable from the bottom-up visual input.
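A toy version of such a sweeping-bar model (illustrative parameters, not the study's code): a bar of activity sweeps a retinotopic sheet in many directions, and persistent homology of the resulting population vectors reveals the ring.

import numpy as np
from ripser import ripser

xs = np.linspace(-1, 1, 16)
coords = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)  # retinotopic unit positions

frames = []
for angle in np.linspace(0, 2 * np.pi, 24, endpoint=False):     # sweep directions
    direction = np.array([np.cos(angle), np.sin(angle)])
    for offset in np.linspace(-1.2, 1.2, 20):                   # bar positions along the sweep
        dist = coords @ direction - offset                      # signed distance to the bar
        frames.append(np.exp(-dist**2 / 0.05))                  # bar-shaped population activity
X = np.array(frames)                                            # population vectors

# One long H1 bar indicates the ring-like topology of the evoked activity.
h1 = ripser(X, maxdim=1)["dgms"][1]
print("longest 1-cycle lifetime:", (h1[:, 1] - h1[:, 0]).max())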
In the resting state, the macaques were seated in a dark room and thus received no visual input. Nevertheless, our analysis reveals two distinct neural manifolds in macaque V1, which are strongly correlated with eyes-open and eyes-closed conditions. The dimensionality of the eyes-open manifold is significantly higher, primarily due to lower noise correlations.
We hypothesize that cortico-cortical communication, estimated from LFP coherence and Granger causality, induces the changes observed in the resting state. We find that top-down signals from V4 to V1 are significantly stronger during eyes-open periods, and that they primarily target the foveal region, in agreement with previous structural reports [6]. Finally, we show in a small balanced spiking-network model that top-down signals can induce multiple neural manifolds, suggesting a causal explanation for our experimental observations.
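A small sketch of the directed-interaction estimate mentioned above, assuming statsmodels and hypothetical LFP channel files; the lag order is illustrative.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

lfp_v1 = np.load("lfp_v1.npy")                  # hypothetical (time,) LFP series
lfp_v4 = np.load("lfp_v4.npy")

# Does past V4 activity improve prediction of V1 (top-down drive)?
data = np.column_stack([lfp_v1, lfp_v4])        # column 0 predicted from column 1
res = grangercausalitytests(data, maxlag=20)
pval = res[20][0]["ssr_ftest"][1]
print(f"V4 -> V1 Granger causality, p-value at lag 20: {pval:.3g}")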
Taken together, the data analysis and simulations suggest that V4-to-V1 signals actively modulate neural manifolds in the visual cortex of the macaque, potentially preparing the visual cortex for fast and efficient vision.
References
[1] Stringer et al. 2019. Nature 571, 361-365 (doi.org/10.1038/s41586-019-1346-5)
[2] Singh et al. 2008. Journal of Vision 8(8), 11 (doi.org/10.1167/8.8.11)
[3] Poort et al. 2012. Neuron 75 (1), 143-156 (doi.org/10.1016/j.neuron.2012.04.032)
[4] Naumann et al. 2022. eLife 11, 76096 (doi.org/10.7554/eLife.76096)
[5] Chen et al. 2022. Scientific Data 9 (1), 77 (doi.org/10.1038/s41597-022-01180-1)
[6] Wang et al. 2022. bioRxiv (doi.org/10.1101/2022.04.27.489651)
16:00 - 16:30
Identifying and annotating computational neuroscience work using natural language processing techniques
Every year, hundreds of papers are published that employ the methods of mathematical and computational neuroscience. The models vary greatly and are dispersed across the neuroscience literature, making it challenging for a modeler with a specific question to find and interpret relevant prior work. To address this need, ModelDB (modeldb.science) was founded over 25 years ago as a discovery tool for the field: a centralized place for sharing models with consistent metadata. Over time, comparisons between models and model analysis tools were added to provide better insight into the shared models. Despite various attempts to introduce standards for neuroscience model representation and for including biological detail in models, identifying relevant metadata for models, along with the models themselves, remains laborious. ModelDB has grown by around 100 models a year through unsolicited code submissions, but this offers only a biased sample of the field's work. To provide a more complete characterization of the field, we are exploring automated identification of computational neuroscience models and their metadata. SPECTER embeddings of title and abstract separate neuroscience works that use modeling from those that do not with high accuracy. GPT-4 allows identification of relevant metadata, with performance varying by category (e.g., cell types vs. research topics) and source location (e.g., abstracts, which are almost always available, rarely mention ion channels), although validation against ModelDB is complicated by variation in curation practices over the years. The cheaper GPT-3.5 appears too prone to hallucination to be usable for this task. We compare the performance of these approaches to prior rule-based strategies, and discuss plans to share identified modeling work on ModelDB to enable further automated data mining while preserving ModelDB's other role as a place to find computational neuroscience model source code. It is our hope that this expanded insight into the state of the field will help modelers avoid repetitive efforts and build more principled models.
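A minimal sketch of the SPECTER classification step, assuming the Hugging Face allenai/specter checkpoint and an existing labeled corpus; the papers and labels variables are hypothetical.

import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
encoder = AutoModel.from_pretrained("allenai/specter")

def embed(title, abstract):
    """SPECTER embedding: the [CLS] vector of 'title [SEP] abstract'."""
    inputs = tokenizer(title + tokenizer.sep_token + abstract,
                       truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        return encoder(**inputs).last_hidden_state[0, 0].numpy()

# papers: list of (title, abstract); labels: 1 = uses modeling, 0 = does not.
X = [embed(t, a) for t, a in papers]            # hypothetical labeled corpus
clf = LogisticRegression(max_iter=1000).fit(X, labels)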
16:30 - 17:00
Unveiling the Effects of Non-Invasive Brain Stimulation through Topological Data Analysis and Biophysical Network Modeling
Non-invasive brain stimulation (NIBS), particularly transcranial magnetic stimulation (TMS), has emerged as a promising technique for modulating brain activity and investigating its impact on cognitive processes. In this talk, we present a novel approach utilizing topological data analysis (TDA) based Mapper to unravel the intricate changes in spatiotemporal organization induced by NIBS in the brain. Moreover, we will delve into our ongoing research, which focuses on leveraging biophysical network modeling to gain deeper insights into the underlying mechanisms driving NIBS-induced brain dynamics, with the hope of ultimately developing targeted and effective stimulation protocols.