Neural systems process information by evolving in time according to underlying biophysical laws. They operate over many different time scales, with all of their function – and dysfunction – originating from the temporally coordinated activity of their components. In recent years, systems neuroscience has devoted significant attention to understanding neural representations and computations as trajectories in a state space. This perspective has been used successfully to describe modulations of neural population activity that correlate with various aspects of systems-level behavior. However, a deeper understanding of the laws that govern the evolution of trajectories in the neural state space, especially their topological and geometric properties, is still lacking. Given high-density recordings and sophisticated perturbation techniques, as well as new developments in AI and scientific machine learning, the field is ready to gain deeper insight into the dynamical foundations of neural computation.
This workshop aims to explore the idea that neural computation is implemented by lawful population dynamics that drive neural trajectories on low-dimensional manifolds. It will go beyond merely reconstructing these trajectories and address the challenge of developing statistical machine learning tools that uncover the inherent dynamical structure in recorded neural activity. The emphasis will be on the theoretical, formal, and methodological foundations needed to move the field forward, covering topics such as:
Conceptual questions:
What constitutes ‘dynamical systems reconstruction’ and what properties of the neural system need to be captured for an effective description?
Can we infer these properties from experimental data, given indirect measurements and access to only a small subset of the relevant variables?
What role do the topological and geometric properties of dynamical systems play in neural computation?
Technical questions:
What are the theoretical and statistical limits on the insights that can be gained from this approach?
How do we interpret the solutions obtained after training a model? Are the learnt solutions reproducible across different modeling choices and datasets?
What are appropriate choices of model architectures and training algorithms for reconstructing dynamical systems, and what are their respective strengths and weaknesses?
How can we evaluate the efficacy of dynamical systems reconstruction? Existing approaches include generative/forecasting performance, invariant measures, topological/geometrical agreement, and long-term behavior; a minimal sketch of two of these criteria is given below.
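As a toy illustration of the last item, the sketch below uses the Lorenz system as ground truth and a parameter-perturbed copy as a stand-in for a trained reconstruction. It computes a short-horizon forecasting error and a symmetrized KL divergence between long-run state-space occupancies, a crude proxy for agreement of invariant measures. The system, the surrogate "model", and the helper names (trajectory, occupancy) are illustrative assumptions, not methods endorsed by the workshop.

# Minimal sketch: two evaluation criteria for dynamical systems reconstruction,
# illustrated on the Lorenz system. The "reconstruction" is a Lorenz copy with
# perturbed parameters, standing in for a model fitted to data.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return [sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2]]

def trajectory(rhs, x0, T=100.0, dt=0.01):
    t = np.arange(0.0, T, dt)
    sol = solve_ivp(rhs, (0.0, T), x0, t_eval=t, rtol=1e-8, atol=1e-8)
    return sol.y.T  # shape (len(t), 3)

x0 = [1.0, 1.0, 1.0]
true_traj = trajectory(lorenz, x0)
recon_rhs = lambda t, x: lorenz(t, x, rho=28.5)  # surrogate "trained model"
recon_traj = trajectory(recon_rhs, x0)

# 1) Short-horizon forecasting error: for a chaotic system this is meaningful
#    only over a few Lyapunov times, after which trajectories decorrelate.
horizon = 200  # steps
mse = np.mean((true_traj[:horizon] - recon_traj[:horizon]) ** 2)

# 2) Invariant-measure / geometric agreement: compare long-run state-space
#    occupancy via binned histograms and a symmetrized KL divergence.
def occupancy(traj, edges):
    h, _ = np.histogramdd(traj, bins=edges)
    h = h.ravel() + 1e-12           # regularize empty bins
    return h / h.sum()

edges = [np.linspace(true_traj[:, d].min(), true_traj[:, d].max(), 20)
         for d in range(3)]
p, q = occupancy(true_traj, edges), occupancy(recon_traj, edges)
sym_kl = 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

print(f"short-horizon forecast MSE: {mse:.3f}")
print(f"symmetrized KL of occupancies: {sym_kl:.4f}")

The two numbers deliberately probe different things: on a chaotic attractor, pointwise forecasting error becomes uninformative beyond a few Lyapunov times, which is exactly why invariant-measure and geometric criteria are valuable complements to short-term prediction.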