Research

Research Overview

The B-B-C Lab (Brain, Behavior & Computation Lab, PI: Xue-Xin Wei) studies how the brain solves problems efficiently. In particular, we are interested in understanding how the brain exploits the statistical regularities of the environment and of task structure to support adaptive and intelligent behavior. To tackle this fundamental question, we combine bottom-up approaches (i.e., analyzing experimental data and constructing constrained mechanistic models) with top-down approaches (i.e., normative modeling) to reveal the basic principles underlying neural information processing.

The lab works at the intersection of Computational/Theoretical Neuroscience, Statistics, and AI/deep learning. We leverage the computational power of deep learning to help understand computations in the brain. For example, we pioneered the use of optimization-based recurrent neural networks (RNNs) to understand the functional and structural properties of the brain's spatial navigation systems (Cueva & Wei, ICLR, 2018; Cueva et al., ICLR, 2020). We also work closely with experimentalists to develop sophisticated statistical techniques and tools that facilitate the understanding of data (Ajabi et al., Nature, 2023; Zhu & Wei, Nature Communications, 2023). For example, a recent method we developed, pi-VAE (Zhou & Wei, NeurIPS, 2020), extracts low-dimensional, intrinsic neural manifolds from high-dimensional data while simultaneously modeling the dependence between the latent states and behavior. Combined with mechanistic modeling, these statistical methods provide a promising way to uncover the computational mechanisms underlying neural computation.
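To make the pi-VAE idea concrete, here is a minimal sketch of a conditional VAE whose latent prior depends on a behavioral label and whose observation model is Poisson. This is an illustration of the general idea, not the published implementation; the architecture, layer sizes, and variable names below are illustrative assumptions only.

    # Minimal pi-VAE-style sketch (illustrative, not the reference code):
    # a VAE with a behavior-conditioned latent prior and Poisson likelihood.
    import torch
    import torch.nn as nn

    class PiVAESketch(nn.Module):
        def __init__(self, n_neurons=100, n_latent=2, n_behavior=2, n_hidden=64):
            super().__init__()
            # Encoder q(z | x): spike counts -> latent mean and log-variance.
            self.encoder = nn.Sequential(
                nn.Linear(n_neurons, n_hidden), nn.Tanh(),
                nn.Linear(n_hidden, 2 * n_latent))
            # Label prior p(z | u): behavior-conditioned Gaussian over latents.
            self.prior = nn.Sequential(
                nn.Linear(n_behavior, n_hidden), nn.Tanh(),
                nn.Linear(n_hidden, 2 * n_latent))
            # Decoder: latents -> Poisson firing rates (softplus keeps rates > 0).
            self.decoder = nn.Sequential(
                nn.Linear(n_latent, n_hidden), nn.Tanh(),
                nn.Linear(n_hidden, n_neurons), nn.Softplus())

        def forward(self, x, u):
            mu_q, logvar_q = self.encoder(x).chunk(2, dim=-1)
            mu_p, logvar_p = self.prior(u).chunk(2, dim=-1)
            # Reparameterization trick: sample z from q(z | x).
            z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
            rates = self.decoder(z)
            # Poisson negative log-likelihood of the spike counts.
            nll = (rates - x * torch.log(rates + 1e-8)).sum(-1).mean()
            # KL divergence between q(z | x) and the label prior p(z | u).
            kl = 0.5 * ((logvar_p - logvar_q)
                        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                        - 1).sum(-1).mean()
            return nll + kl  # negative ELBO, to be minimized

    # Toy usage: 500 "trials" of 100-neuron counts with a 2-D behavior label.
    model = PiVAESketch()
    x = torch.poisson(torch.rand(500, 100) * 5.0)
    u = torch.randn(500, 2)
    loss = model(x, u)
    loss.backward()

Minimizing the negative ELBO pushes the latents both to reconstruct the spike counts and to organize according to behavior, which is what makes the recovered manifold interpretable.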

Overall, our research formulates theoretical/computational models and rigorously tests their predictions against data through close collaboration with experimentalists, forming theory-experiment loops.



Optimization-based recurrent neural network (RNN) models for understanding the neural basis of cognition

We have been developing a computational modeling approach that studies cognitive processing by optimizing recurrent neural networks to perform behavioral tasks. Applications of this approach have revealed insights into the principles of information processing underlying spatial navigation (Cueva & Wei, ICLR, 2018; Cueva et al., ICLR, 2020). One current focus is working memory; another is grid cells and place cells in the navigation circuits.
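Below is a toy sketch of this approach applied to path integration, in the spirit of Cueva & Wei (2018): an RNN receives velocity inputs and is trained to report its integrated position. The network size, task parameters, and training loop are simplified stand-ins, not the published models.

    # Toy sketch: train an RNN to path-integrate 2-D velocity into position.
    # All hyperparameters here are illustrative assumptions.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    T, batch, hidden = 50, 64, 128

    rnn = nn.RNN(input_size=2, hidden_size=hidden, batch_first=True)
    readout = nn.Linear(hidden, 2)  # decode 2-D position from the hidden state
    opt = torch.optim.Adam(
        list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

    for step in range(200):
        # Random 2-D velocity inputs; the target is the integrated position.
        vel = 0.1 * torch.randn(batch, T, 2)
        pos = torch.cumsum(vel, dim=1)
        h, _ = rnn(vel)
        loss = ((readout(h) - pos) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # After training, one would inspect the hidden units' spatial tuning,
    # e.g., whether grid-like, border-like, or band-like responses emerge.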



Developing novel neural data analysis methods

We develop sophisticated statistical models to analyze large-scale neural data, in particular electrophysiological recordings and calcium imaging (Wei*, Zhou*, et al., NBDT, 2020; Zhou & Wei, NeurIPS, 2020; Kim, Liu & Wei, AISTATS, 2023; Rong & Wei, Nature Communications, 2023). We are particularly interested in developing efficient and flexible methods that can capture the single-trial dynamics of neural states.
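As a simple baseline illustrating what single-trial latent states look like in practice, the sketch below fits plain factor analysis to synthetic population activity and reads out one low-dimensional trajectory per trial. This generic method is a stand-in for illustration, not one of the lab's models, and the data here are fake.

    # Baseline sketch: single-trial latent trajectories via factor analysis.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n_trials, n_bins, n_neurons, n_latent = 40, 100, 60, 3

    # Fake data: a shared low-dimensional trajectory plus per-neuron noise.
    latents = np.cumsum(rng.standard_normal((n_trials, n_bins, n_latent)), axis=1)
    loading = rng.standard_normal((n_latent, n_neurons))
    activity = latents @ loading + rng.standard_normal((n_trials, n_bins, n_neurons))

    # Fit on all time bins pooled across trials, then recover a latent
    # trajectory separately for each trial.
    fa = FactorAnalysis(n_components=n_latent)
    fa.fit(activity.reshape(-1, n_neurons))
    single_trial = fa.transform(activity.reshape(-1, n_neurons)).reshape(
        n_trials, n_bins, n_latent)  # one latent trajectory per trial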



Efficient neural coding

We construct computational models and theories based on the efficient coding hypothesis (Barlow, 1961) that account for various experimental observations in neurophysiology. This work yields insights into how the brain encodes information in both sensory and cognitive brain areas (Wei & Stocker, Neural Computation, 2016; Wei, Prentice & Balasubramanian, eLife, 2015; Wang*, Wei*, et al., NeurIPS, 2016). Current work investigates the role of geometry in determining the properties of the neural code.
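One standard formalization behind this line of work is the Fisher-information formulation of efficient coding, sketched below in our own notation; the particular resource constraint shown is one common choice, not the only one.

    % Sketch of the Fisher-information formulation of efficient coding;
    % the constraint below is one common choice among several.
    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    For a scalar stimulus $\theta$ with prior $p(\theta)$, let $J(\theta)$ be
    the Fisher information of the neural representation. Maximizing the mutual
    information between stimulus and response under the resource constraint
    $\int \sqrt{J(\theta)}\,\mathrm{d}\theta \le C$ yields
    \begin{equation}
      \sqrt{J(\theta)} \propto p(\theta),
    \end{equation}
    so coding precision (and hence tuning-curve density) is allocated in
    proportion to how often stimulus values occur in the environment.
    \end{document}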





Normative models of behavior for perception and working memory

We have developed an integrated framework for perception that unifies two prominent hypotheses in neuroscience, namely efficient coding and Bayesian inference (Wei & Stocker, Nature Neuroscience, 2015; Wei & Stocker, PNAS, 2017). Several ongoing projects build on this framework, focusing on working memory (Wei & Woodford, 2023) and the perception of low- and high-dimensional stimuli (Hahn & Wei, 2022).
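A compact summary of the framework's logic and its signature prediction follows (cf. Wei & Stocker, 2015, 2017); this is a sketch in our own notation, not a derivation.

    % Sketch of the efficient-coding Bayesian observer and its signature
    % prediction (cf. Wei & Stocker, 2015, 2017); notation is ours.
    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    Efficient coding constrains the encoder, $\sqrt{J(\theta)} \propto
    p(\theta)$, and the percept is obtained by Bayesian decoding of the
    resulting representation. Together these link the perceptual bias
    $b(\theta)$ to the discrimination threshold $D(\theta)$:
    \begin{equation}
      b(\theta) \propto \frac{\mathrm{d}}{\mathrm{d}\theta}\, D^{2}(\theta),
    \end{equation}
    a lawful relationship that can be tested directly against psychophysical
    data.
    \end{document}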




Computational models of sensory adaptation

Ongoing work aims to understand how the response properties of neurons in sensory cortex adapt to the structure of stimulus inputs (e.g., Wei & Miller, VSS, 2019).