Speakers

Matt Dowling

Stony Brook University

Online Variational Learning for Exponential Family Dynamical Systems

Latent variable models have become instrumental in computational neuroscience for reasoning about neural computation. This has fostered the development of powerful offline algorithms for extracting latent neural trajectories from neural recordings. However, despite the potential of real-time alternatives to give immediate feedback to experimentalists and enhance experimental design, they have received markedly less attention. We introduce the exponential family variational Kalman filter (eVKF), an online recursive Bayesian method for inferring latent trajectories while simultaneously learning the dynamical system that generates them. eVKF works for arbitrary likelihoods and uses the constant base measure exponential family to model latent state stochasticity. We demonstrate the real-time capabilities of eVKF and, notably, show that it decodes hand position comparably well to GPFA, even though GPFA is an offline method.
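
As a rough illustration of the predict/update recursion the abstract describes, here is a minimal, hypothetical sketch for the special case of a linear-Gaussian latent (one constant-base-measure exponential family) with a Poisson spike-count likelihood. The function names, the gradient-ascent ELBO optimization, and the Laplace-style covariance are illustrative assumptions, not the eVKF natural-parameter update itself, and the simultaneous dynamics-learning step is omitted.

```python
# Hypothetical sketch: one filtering step with a linear-Gaussian latent and a
# Poisson spike-count likelihood. The ELBO maximization below is illustrative,
# not the eVKF natural-parameter update, and dynamics learning is omitted.
import numpy as np

def predict(m, P, A, Q):
    """Propagate the Gaussian filtering posterior through the linear dynamics."""
    return A @ m, A @ P @ A.T + Q

def variational_update(m_pred, P_pred, y, C, b, n_iters=100, lr=0.05):
    """Find the posterior mode by gradient ascent, then take a Laplace-style
    covariance from the curvature at the mode."""
    m, P_inv = m_pred.copy(), np.linalg.inv(P_pred)
    for _ in range(n_iters):
        rate = np.exp(C @ m + b)                      # per-neuron Poisson rates
        m = m + lr * (C.T @ (y - rate) - P_inv @ (m - m_pred))
    rate = np.exp(C @ m + b)
    H = C.T @ (rate[:, None] * C) + P_inv             # curvature at the mode
    return m, np.linalg.inv(H)

# toy usage: 2-D latent state, 10 neurons, one bin of spike counts
rng = np.random.default_rng(0)
A, Q = 0.95 * np.eye(2), 0.01 * np.eye(2)
C, b = rng.normal(size=(10, 2)), -1.0 * np.ones(10)
m, P = np.zeros(2), np.eye(2)
y = rng.poisson(1.0, size=10)
m, P = variational_update(*predict(m, P, A, Q), y, C, b)
```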

Bio

Matt is interested in probabilistic and statistical methods for uncovering latent structure from high-dimensional spatiotemporal neural time series. His research centers on Bayesian methods for nonlinear, non-Gaussian state-space models of neural activity, with an emphasis on approximate Bayesian inference.

Scott Linderman

Stanford University

Simple state space layers for modeling neural population dynamics

A central goal of statistical neuroscience is to develop accurate and interpretable models of neural population dynamics. Simple models like linear dynamical systems (LDSs) offer interpretability, but their accuracy is fundamentally limited by their linear assumptions. Recurrent neural networks (RNNs) and switching linear dynamical systems (SLDSs) allow nonlinear dynamics, but they too have drawbacks: empirically, RNNs struggle to capture long-timescale dependencies, fitting SLDSs requires solving a hard combinatorial optimization problem, and once either model is fit, it remains a challenge to reverse engineer its dynamics. I will show how to achieve state-of-the-art performance on time series prediction tasks using simple linear dynamical systems. The trick is to build a stack of LDSs and connect them via standard nonlinear activations, so the model has linear dynamics in time but nonlinear connections in depth. We call this composition of simple state space layers "S5"; it currently tops the leaderboard on a variety of machine learning benchmarks, including the Neural Latents Benchmark. Early work suggests that S5 offers the best of both worlds: it retains the desirable features of linear models without sacrificing accuracy.
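
The "linear in time, nonlinear in depth" idea lends itself to a short sketch. The hypothetical code below stacks diagonal linear state space layers with tanh nonlinearities between them; the real S5 layer additionally uses learned continuous-time parameters, a discretization step, and a parallel scan over time, none of which are shown here.

```python
# Hypothetical sketch of "linear in time, nonlinear in depth". The real S5
# layer also has learned continuous-time parameters, discretization, and a
# parallel scan; this sequential toy version shows only the structure.
import numpy as np

def ssm_layer(u, a, B, C):
    """Diagonal linear recurrence x_t = a * x_{t-1} + B u_t over time,
    with readout y_t = Re(C x_t). u: (T, d); a: (n,) complex poles."""
    x = np.zeros_like(a)
    y = np.empty((u.shape[0], C.shape[0]))
    for t in range(u.shape[0]):
        x = a * x + B @ u[t]
        y[t] = (C @ x).real
    return y

def s5_stack(u, layers):
    """Stack SSM layers, connecting them with tanh nonlinearities in depth."""
    for a, B, C in layers:
        u = np.tanh(ssm_layer(u, a, B, C))  # linear in time, nonlinear in depth
    return u

# toy usage: 3 layers of width 16 with 32 latent states each
rng = np.random.default_rng(0)
def make_layer(d, n):
    a = np.exp(-0.05 + 1j * rng.uniform(0, np.pi, n))          # stable poles
    return a, rng.normal(size=(n, d)) / np.sqrt(d), rng.normal(size=(d, n)) / np.sqrt(n)

layers = [make_layer(16, 32) for _ in range(3)]
out = s5_stack(rng.normal(size=(100, 16)), layers)             # shape (100, 16)
```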

Bio

Scott Linderman is an Assistant Professor at Stanford University in the Statistics Department and the Wu Tsai Neurosciences Institute. His research focuses on statistical models and machine learning methods for deciphering neural computation. His lab develops novel methods for state space models, deep generative models, point processes, and approximate Bayesian inference, and they work closely with experimental colleagues to apply these techniques to large-scale neural and behavioral recordings. Previously, Scott was a postdoctoral fellow with Liam Paninski and David Blei at Columbia University and a graduate student at Harvard University with Ryan Adams. His work has been recognized with a Savage Award from the International Society for Bayesian Analysis, an AISTATS Best Paper Award, and a Sloan Fellowship.

Adi Nair

California Institute of Technology

Latent dynamical models discover state-dependent, line attractor-like representations in the hypothalamus during social behavior

The hypothalamus is a crucial center in the brain that regulates innate drives such as fear, mating, and aggression. Optogenetic perturbations have shown that various cell types in nuclei such as the ventrolateral subdivision of the ventromedial hypothalamus (VMHvl) promote aggression in male mice and sexual behavior in female mice. Yet neural recordings from these same neurons fail to identify individual neurons tuned to these behaviors. In this talk, I will show how the analysis of latent dynamical models fit to neural data can reveal an approximate line attractor in VMHvl neurons that encodes an aggressive state in male mice. Furthermore, these models uncover a line attractor-like representation of sexual arousal in the female VMHvl that is reversibly regulated by hormonal state during the estrous cycle. These results illustrate the power of latent dynamical models to bridge the gap between neural perturbation and representation, and they identify dynamical motifs that are reused to compute social behavior-related state variables in the hypothalamus.
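
For readers unfamiliar with the term, an approximate line attractor in a fitted (locally) linear dynamical model shows up as one eigenvalue of the dynamics matrix close to 1 (a slow, integrating direction) while the rest lie well inside the unit circle. The sketch below is a hypothetical diagnostic along those lines, not the analysis pipeline used in the talk.

```python
# Hypothetical diagnostic: the time constants of a fitted discrete-time
# dynamics matrix. An approximate line attractor appears as one time
# constant far longer than the rest.
import numpy as np

def time_constants(A, dt=1.0):
    """Time constants (in units of dt) of each eigenmode of x_{t+1} = A x_t."""
    mags = np.clip(np.abs(np.linalg.eigvals(A)), 1e-12, 1 - 1e-12)
    return np.sort(-dt / np.log(mags))[::-1]

# toy example: one near-integrating mode among fast-decaying ones
A = np.diag([0.999, 0.7, 0.5])
print(time_constants(A))  # ~[999.5, 2.8, 1.4]: a slow, line-attractor-like mode
```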

Bio

Adi is a graduate student in Computation and Neural Systems in David Anderson’s lab at Caltech, collaborating closely with Ann Kennedy at Northwestern and Scott Linderman at Stanford. He combines experimental approaches and theoretical tools to dissect neural mechanisms of social behavior, with the ultimate goal of understanding neuropsychiatric disorders through the lens of dynamical systems.

Chethan Pandarinath

Emory University & Georgia Institute of Technology

Latent variable models: accelerating progress and powering brain-computer interfaces

Latent variable models (LVMs) are generative models of neural population activity. They offer a promising approach to analyzing and interpreting high-dimensional neural data, and potentially to improving the performance of brain-computer interfaces for people with paralysis. First, I will discuss the Neural Latents Benchmark [1], our recent large-scale collaborative effort to accelerate progress by establishing standards for evaluating LVMs. By releasing multiple reference datasets and associated metrics for evaluating different aspects of model performance, this effort facilitates comparisons between new and prior work and provides a platform for tracking progress in the field. Second, I will discuss our recent effort to improve the robustness and stability of intracortical brain-computer interfaces (iBCIs), termed Nonlinear Manifold Alignment with Dynamics (NoMAD) [2]. NoMAD is an unsupervised approach to stabilizing iBCI decoding using LVMs of neural population dynamics. Together, these works demonstrate paths forward for accelerating progress in the field and improving iBCI robustness.
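
As a hedged stand-in for the stabilization idea: if recorded latent activity has drifted relative to the day a decoder was trained, an unsupervised alignment map can restore the original distribution so the frozen decoder keeps working. NoMAD itself aligns distributions using an LVM of neural population dynamics; the moment-matching map below is only a simple illustrative substitute, and all names and sizes are made up.

```python
# Illustrative substitute only: a moment-matching affine map that aligns the
# mean and covariance of drifted latents to a reference day, so a frozen
# decoder can be reused. NoMAD's actual alignment uses an LVM of dynamics.
import numpy as np

def sqrtm_sym(S):
    """Symmetric matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def moment_matching_alignment(Z_new, Z_ref):
    """Return an affine map sending Z_new's mean/covariance to Z_ref's."""
    mu_n, mu_r = Z_new.mean(0), Z_ref.mean(0)
    W = sqrtm_sym(np.cov(Z_ref.T)) @ np.linalg.inv(sqrtm_sym(np.cov(Z_new.T)))
    return lambda Z: (Z - mu_n) @ W.T + mu_r

# usage: fit on unlabeled day-k latents, then decode with the day-0 decoder
rng = np.random.default_rng(0)
Z_ref = rng.normal(size=(500, 8))                         # day-0 latents
Z_new = Z_ref @ rng.normal(size=(8, 8)) * 0.5 + 2.0       # drifted day-k latents
aligned = moment_matching_alignment(Z_new, Z_ref)(Z_new)  # feed to old decoder
```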

Bio

Dr. Pandarinath is an assistant professor in the Coulter Department of Biomedical Engineering at Emory University and Georgia Tech and in the Department of Neurosurgery at Emory, where he directs the Systems Neural Engineering Lab. His group applies machine learning and AI to study the nervous system and to design assistive devices for people with neurological disorders or injuries. He is a 2019 Sloan Fellow, a K12 Scholar in the NIH-NICHD Rehabilitation Engineering Career Development Program, and a recipient of the 2021 NIH Director’s New Innovator Award.

Cristina Savin

New York University

Probabilistic manifold alignment across animals

Identifying the common structure of neural dynamics across subjects offers a new means of extracting unifying principles of brain computation and is of practical relevance for brain-machine interface applications. We present a probabilistic approach for aligning stimulus-evoked responses from multiple animals in a common low-dimensional manifold. Using hierarchical inference in our graphical model, we derive a probabilistic decoder that can read out stimulus identity from a novel test animal with minimal subject-specific training. When applied to recordings from the mouse olfactory bulb, our approach reveals low-dimensional, odor-specific population dynamics with a consistent geometry across animals.
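
A hypothetical simulation of the generative structure described above: each stimulus evokes a shared low-dimensional latent trajectory, and each animal observes it through its own loading matrix plus noise. The talk's model is fit with hierarchical Bayesian inference; the sketch below only simulates the graphical-model structure, with made-up sizes.

```python
# Hypothetical simulation of the generative structure: shared odor-specific
# latent trajectories observed by each animal through its own loadings.
# Sizes and noise levels are made up; no inference is performed here.
import numpy as np

rng = np.random.default_rng(0)
T, K, D = 50, 3, 4                 # time bins, odors, latent dimensions
n_neurons = [80, 120]              # two animals with different populations

# shared, odor-specific latent dynamics (smooth random walks as a stand-in)
Z = np.cumsum(rng.normal(scale=0.1, size=(K, T, D)), axis=1)

# each animal sees the common manifold through its own loading matrix
animals = []
for N in n_neurons:
    C = rng.normal(size=(N, D))                          # per-animal loadings
    Y = Z @ C.T + rng.normal(scale=0.5, size=(K, T, N))  # observed responses
    animals.append((C, Y))

# decoding a new animal then reduces to inferring its loadings and asking
# which shared trajectory Z[k] best explains its responses
```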

Bio

After a PhD at Goethe University Frankfurt studying the role of different forms of plasticity in unsupervised learning, Cristina worked as a postdoctoral researcher at the University of Cambridge, developing normative models of memory. This was followed by a short stint at the ENS in Paris, modeling probabilistic computation in spiking neurons, and an independent research fellowship at IST Austria, building statistical tools for quantifying learning in multiunit recordings. Since 2017 she has been an Assistant Professor of Neural Science and Data Science at NYU. Her work focuses on identifying principles of brain computation, in particular adaptive behavior, by combining theoretical modeling and neural data analysis.

Iris Stone

Princeton University

Opponent control of behavior by dorsomedial striatal pathways depends on task demands and internal state

A classic view of the striatum holds that activity in the direct and indirect pathways oppositely modulates motor output. Whether this involves direct control of movement, or reflects a cognitive process underlying movement, remains unresolved. Here we find that strong, opponent control of behavior by the two pathways of the dorsomedial striatum depends on the cognitive requirements of a task. Furthermore, a latent state model (a hidden Markov model with generalized linear model observations) reveals that, even within a single task, the contribution of the two pathways to behavior is state-dependent. Specifically, the two pathways contribute strongly in a state associated with a strategy of evidence accumulation, but far less in a state associated with a strategy of repeating previous choices. Thus, both the demands imposed by a task and the internal state of mice performing it determine whether dorsomedial striatum pathways provide strong, opponent control of behavior.
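
The latent state model named in the abstract, a hidden Markov model with Bernoulli GLM observations (a GLM-HMM), admits a compact likelihood computation. The sketch below is a minimal, self-contained version; the weights, transition matrix, and covariates are made up for illustration.

```python
# Minimal sketch of a GLM-HMM: HMM states whose emissions are Bernoulli GLMs
# of the choice given task covariates. All parameter values are made up.
import numpy as np

def glm_hmm_loglik(X, y, W, P, pi0):
    """Log-likelihood via the scaled forward algorithm. X: (T, d) covariates,
    y: (T,) binary choices, W: (S, d) per-state GLM weights, P: (S, S)
    state transitions, pi0: (S,) initial state distribution."""
    p1 = 1.0 / (1.0 + np.exp(-X @ W.T))            # (T, S): P(y_t=1 | state)
    lik = np.where(y[:, None] == 1, p1, 1.0 - p1)  # per-state emission probs
    alpha = pi0 * lik[0]
    c = alpha.sum(); logZ = np.log(c); alpha = alpha / c
    for t in range(1, len(y)):
        alpha = (alpha @ P) * lik[t]
        c = alpha.sum(); logZ += np.log(c); alpha = alpha / c
    return logZ

# toy usage: an "engaged" state (stimulus-driven) vs. a "biased" state
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)); y = rng.integers(0, 2, size=200)
W = np.array([[3.0, 0.1, 0.1],      # state 1: weight on the stimulus
              [0.1, 0.1, 2.0]])     # state 2: weight on a bias/history term
P = np.array([[0.95, 0.05], [0.05, 0.95]]); pi0 = np.array([0.5, 0.5])
print(glm_hmm_loglik(X, y, W, P, pi0))
```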

Bio

Iris is a 5th-year Ph.D. student at the Princeton Neuroscience Institute working with Jonathan Pillow and Ilana Witten. Broadly speaking, her interests include using statistical modeling and machine learning to understand both the neural circuitry and behavior that support complex higher-order processes like decision-making and social interactions. Her current work includes using latent-state models to identify the discrete structures underlying these cognitive processes. Prior to Princeton, Iris earned a B.S. in Physics from George Mason University, where she studied the use of organic and nanomaterials for applications in biomedicine and neuroscience.

Anqi Wu

Georgia Institute of Technology

Addressing identification issues in nonlinear neural latent models

Data-driven statistical models have achieved tremendous success in neural latent discovery. A key property of these latent models is identifiability: the discovered model parameters match the ground truth that generated the neural data. Ensuring identifiability matters because neuroscientists rely heavily on a model's findings to uncover scientific underpinnings. In this talk, I will introduce two works that address the identifiability issue in two families of neural latent models: VAE-based and GP-based. In the first work, we develop a novel identifiable variational autoencoder for multi-neuron spike trains. In the second, we develop a fast and stable inference approach for the Gaussian process latent variable model that yields identifiable neural latents. With these works, we hope to push forward the study of identifiability, which is essential for developing principled, data-driven statistical models for rigorous neuroscience.
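
A small numerical illustration of why identifiability fails in unconstrained latent models: in a linear-Gaussian model, rotating the latents and counter-rotating the loadings leaves the data distribution unchanged, so the discovered latents are defined only up to such transformations. The two methods in the talk (an identifiable VAE and a GPLVM inference scheme) are designed to remove ambiguities of this kind; neither is implemented here.

```python
# Numerical demo of non-identifiability in a linear-Gaussian latent model:
# rotating latents and counter-rotating loadings gives identical data.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 2))                   # "true" latents
C = rng.normal(size=(20, 2))                     # "true" loadings
theta = 0.7                                      # any rotation angle works
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

X1 = Z @ C.T                                     # original parameterization
X2 = (Z @ R) @ (C @ R).T                         # rotated latents + loadings
print(np.allclose(X1, X2))                       # True: indistinguishable
```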

Bio

Anqi Wu is an Assistant Professor in the School of Computational Science and Engineering (CSE) at the Georgia Institute of Technology. She was a Postdoctoral Research Fellow at the Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University. She received her Ph.D. in Computational and Quantitative Neuroscience and a graduate certificate in Statistics and Machine Learning from Princeton University. Anqi was selected as a 2018 MIT Rising Star in EECS, a 2022 DARPA Riser, and a 2023 Sloan Fellow.

Her research interest is in developing scientifically motivated Bayesian statistical models to characterize structure in neural and behavioral data, at the intersection of machine learning and computational neuroscience. She has a general interest in building data-driven models to advance both animal and human studies in systems and cognitive neuroscience.