What I cannot create I do not understand: analyzing neural and behavioral data with generative models
Workshop Description
A central goal of systems neuroscience is to understand how high-dimensional neural activity relates to complex stimuli and behaviors. Recent advances in neural and behavioral recording techniques have made it routine to collect large datasets to answer these questions. With access to such rich data, we can now describe activity at the level of neural populations rather than individual neurons, and we can study the neural underpinnings of increasingly complex and naturalistic behaviors. Unfortunately, it can be challenging to extract interpretable structure from such high-dimensional, often noisy, data. Generative modeling is a powerful approach from probabilistic machine learning that can reveal this structure by learning the statistical properties of the recorded data, often under the assumption that the high-dimensional observations arise from a lower-dimensional 'latent' process. Moreover, constructing such generative models makes it possible to build prior knowledge about the data, such as multi-region structure or temporal continuity, directly into the analysis pipeline. This allows us both to use the available data more efficiently, by building in appropriate inductive biases, and to make the models more interpretable, by shaping them according to the known structure of the data. Given the wealth of advances in generative modeling for systems neuroscience in recent years, we think the time is ripe to review this progress and discuss both challenges and opportunities for the future.
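As a minimal sketch of the idea above, the toy example below (a hypothetical illustration, not any specific model from the workshop) simulates a linear-Gaussian generative model in which high-dimensional "neural" observations arise from a smooth lower-dimensional latent trajectory, then recovers the latent subspace with PCA via the SVD; all variable names and parameter choices are assumptions for illustration.

```python
import numpy as np

# Toy linear-Gaussian latent variable model (illustrative assumptions only).
rng = np.random.default_rng(0)
T, D, K = 500, 30, 3  # time points, observed "neurons", latent dimensions

# Latent process: a smooth low-dimensional trajectory (Gaussian random walk).
latents = np.cumsum(rng.normal(size=(T, K)), axis=0)

# Generative mapping: observations = latents @ loading^T + observation noise.
loading = rng.normal(size=(D, K))
observations = latents @ loading.T + 0.5 * rng.normal(size=(T, D))

# Recover the latent subspace with PCA (SVD of the mean-centered data).
X = observations - observations.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
est_latents = X @ Vt[:K].T  # projections onto the top-K components

# Because the true latents are low-dimensional and smooth, the top-K
# components should capture almost all of the observed variance.
var_explained = (S[:K] ** 2).sum() / (S ** 2).sum()
```

Here PCA stands in for the richer latent variable models discussed in the workshop (e.g. Gaussian-process or state-space priors), which additionally encode temporal continuity or multi-region structure directly in the generative model.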
Organizers
Relevant work
Opponent control of behavior by dorsomedial striatal pathways depends on task demands and internal state (Bolkan, Stone et al., 2022)
A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation (Balzani et al., 2022)
Inferring single-trial neural population dynamics using sequential auto-encoders (Pandarinath et al., 2018)
An approximate line attractor in the hypothalamus encodes an aggressive state (Nair et al., 2023)
Gaussian process based nonlinear latent structure discovery in multivariate spike train data (Wu et al., 2017)
Disentangling the flow of signals between populations of neurons (Gokcen et al., 2022)
Hierarchical recurrent state space models reveal discrete and continuous dynamics of neural activity in C. elegans (Linderman et al., 2019)
Neural Latents Benchmark '21: Evaluating latent variable models of neural population activity (Pei, Ye et al., 2022)
Stabilizing brain-computer interfaces through alignment of latent dynamics (Karpowicz et al., 2022)