Broadly speaking, my main research interests concern statistical models of behavior and coding in neural populations: how populations of neurons encode sensory information and perform computations, what collective behaviors of neural activity underpin those computations, and how we can reliably interpret neural data to dissect the computations performed by experimentally recorded circuits. My approaches to these problems use methods from statistical physics, information theory, and machine learning.
More information about specific research projects follows below!
The brain transforms sensory inputs—like light, sound, or taste—into electrical signals that neurons can process. In computational neuroscience, this is viewed as encoding sensory signals into neural activity. We aim to identify principles that predict how neurons encode stimuli, then test these predictions by decoding stimuli from neural activity.
In the Brinkman lab we have taken a biologically constrained machine learning approach, training artificial neural networks (ANNs) built from neurons that fire spikes (a feature absent from most conventional ANNs) and that make reciprocal connections to one another. These ANNs adapt their connectivity through exposure to sensory inputs, allowing them to “learn” how to encode stimulus features. We have used this approach to explore how sensory coding in the visual cortex deteriorates with age. In recent work we have also shown that by modifying the developmental rules of this spiking ANN we could extend it to model the auditory cortex. We found that the brain uses comparable rules to process visual images and to respond to sounds, suggesting a common strategy across different sensory cortices.
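As a toy illustration of the kind of network described above, the sketch below simulates a small population of leaky integrate-and-fire neurons with random reciprocal connections. All parameter values, the fixed random connectivity, and the constant stimulus are illustrative assumptions for exposition only; they are not the lab's trained model, whose connectivity is learned from sensory inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical minimal sketch: N leaky integrate-and-fire neurons with
# random recurrent (reciprocal) coupling, driven by a constant "stimulus".
# Parameters are illustrative assumptions, not fitted values.
N, T, dt = 100, 500, 1.0                 # neurons, time steps, ms per step
tau, v_thresh, v_reset = 20.0, 1.0, 0.0  # membrane time constant, spike rule
W = rng.normal(0.0, 0.1 / np.sqrt(N), size=(N, N))  # recurrent weights
np.fill_diagonal(W, 0.0)                 # no self-connections

v = rng.random(N) * v_thresh             # random initial membrane potentials
spikes = np.zeros((T, N))                # spike raster (time x neuron)
stimulus = 0.06                          # constant external drive

for t in range(T):
    recurrent = W @ spikes[t - 1] if t > 0 else np.zeros(N)
    v += (dt / tau) * (-v) + stimulus + recurrent  # leaky integration
    fired = v >= v_thresh
    spikes[t, fired] = 1.0               # record spikes
    v[fired] = v_reset                   # reset after spiking

rates = spikes.mean(axis=0) / (dt / 1000.0)  # firing rates in Hz
print(f"mean rate: {rates.mean():.1f} Hz")
```

In a learning version of this sketch, `W` would be updated from the network's responses to stimuli rather than fixed at random values.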
The "critical brain hypothesis" posits that neural circuitry may operate near a critical point, a boundary between two different phases of collective behavior, akin to the point at which liquid water freezes to become a solid as temperature is lowered. Proponents of the idea argue that operating at criticality may have several advantages, and while there is some evidence for criticality in certain cases, it is far from clear whether actual neural circuits operate near critical points. Even if they do not, understanding the critical properties of network models can still tell us much about the emergent collective behavior of a neural population. This may be one way to understand the origin of low-dimensional behavior in neural systems: it may arise from collective modes of activity.
In order to better interpret neural data and identify reliable signatures of the presence or absence of neural criticality, Dr. Brinkman has been working on establishing the theoretical foundations of criticality in models of spiking neuron populations. His lab is doing so by adapting tools from the renormalization group to settings with biophysical constraints, such as random synaptic connections and populations composed of separate excitatory and inhibitory cell types.
In recent work, Dr. Brinkman has shown that network models of in vitro (tissue in a dish) and in vivo (tissue in an intact brain) networks exhibit different kinds of phase transitions. In in vitro networks, in which neurons do not receive input from other brain regions, there is a transition between a silent state and a self-sustained active state as the strength of synaptic connections between neurons is increased. In in vivo networks, in which neurons are spontaneously active, there is a transition from uncoordinated activity to strongly coordinated low- or high-firing-rate activity as the strength of synaptic connections between neurons is increased. See the preprint for details.
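The silent-versus-sustained transition in input-free networks can be caricatured with a toy stochastic model: each active neuron activates others with a probability set by a coupling strength `g`, and activity either dies out (weak coupling) or sustains itself (strong coupling). This branching-style sketch is a hypothetical illustration of that qualitative picture, not the network model analyzed in the preprint.

```python
import numpy as np

rng = np.random.default_rng(1)

def sustained_fraction(g, N=200, T=200, trials=20):
    """Toy 'in vitro'-style network with no external drive.

    Each active neuron activates each other neuron with probability
    g/N per step (so g is roughly the branching ratio). Returns the
    fraction of trials in which activity survives to time T.
    (Illustrative toy model; all parameters are assumptions.)
    """
    alive = 0
    for _ in range(trials):
        active = np.zeros(N, dtype=bool)
        active[:10] = True                    # small initial kick
        for _ in range(T):
            n_act = active.sum()
            if n_act == 0:
                break                         # activity has died out
            p = 1.0 - (1.0 - g / N) ** n_act  # prob. a neuron is activated
            active = rng.random(N) < p
        alive += bool(active.any())
    return alive / trials

for g in (0.5, 1.5):
    print(f"g={g}: activity sustained in {sustained_fraction(g):.0%} of trials")
```

Below the transition (here `g < 1`) activity almost always dies out; above it, a self-sustained active state persists, mimicking the qualitative effect of strengthening synaptic connections.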
Advances in neural recording now allow us to monitor brain activity from hundreds to thousands of neurons at once, but this rapid growth in data has far outpaced our statistical tools for analyzing and interpreting it. Moreover, despite the volume of data that experimentalists can now record, the number of neurons that can be recorded in any brain area is only a fraction of the neurons participating in any given function the brain is performing. Managing this paradox of grappling with large volumes of data that nonetheless paint incomplete pictures of neural activity comprises the third branch of the Brinkman lab's research.
One of the main problems we seek to understand is how to properly account for the effects of unrecorded neurons in our interpretations of neural data. In recent work the Brinkman lab studied how statistical inference procedures are skewed by these unobserved neurons, finding that inferred synaptic connections between neurons strongly resemble the spike-train cross-covariances, which do not offer any information about the directionality of neural interactions, and hence their contribution to computations that evolve over time. However, this work also suggested that using neurons' transient responses to brief, strong stimulation may alleviate some of the limitations of recording from only a subset of neurons in a brain area; this is an active area of ongoing research in the Brinkman lab.
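A minimal sketch of why covariance-like statistics hide directionality: in the hypothetical generative model below, neuron 0 drives neuron 1 through a purely one-way interaction, yet the covariance matrix of binned spike counts is symmetric by construction, so it cannot reveal which neuron drives which. This toy model is an assumption for illustration, not the lab's inference procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy generative model (illustrative assumption): three neurons firing
# at a low baseline rate, with a single DIRECTED interaction 0 -> 1.
T, N, bin_w = 20000, 3, 20
base, drive = 0.05, 0.4
spikes = np.zeros((T, N))
for t in range(T):
    p = np.full(N, base)
    if t > 0 and spikes[t - 1, 0]:
        p[1] += drive                     # neuron 0 excites neuron 1
    spikes[t] = rng.random(N) < p

# Covariance of binned spike counts: symmetric by construction, so the
# 0 -> 1 direction of the interaction is invisible in this statistic.
counts = spikes.reshape(T // bin_w, bin_w, N).sum(axis=1)
C = np.cov(counts.T)

print("symmetric:", np.allclose(C, C.T))
print("coupled pair stands out:", C[0, 1] > abs(C[0, 2]))
```

The connected pair (0, 1) shows elevated covariance relative to the unconnected pair (0, 2), but `C[0, 1]` equals `C[1, 0]`, which is one way to see why inferred couplings that mirror such statistics cannot resolve directionality.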