A non-Hebbian code for episodic memory
We introduce a novel, biologically plausible learning rule that relies only on presynaptic activity. The rule excels in tasks built on known world-model trajectories, outperforming traditional Hebbian rules such as spike-timing-dependent plasticity (STDP). The resulting memory traces, termed “path vectors,” are expressive, decodable, and support diverse one-shot tasks and policy learning. Thus, non-Hebbian plasticity suffices for flexible memory and learning, encoding episodes and policies as paths through a world model.
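The rule itself is best read in the paper; as a toy illustration of the idea, here is a minimal sketch in which a purely presynaptic update sums population codes along a trajectory into a single “path vector” that can later be decoded. The random codebook and the overlap decoder are assumptions of this sketch, not the paper’s construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_cells, path_len = 50, 200, 8

# Random codebook: each world-model state evokes a presynaptic activity pattern.
codebook = rng.standard_normal((n_states, n_cells))

def path_vector(state_sequence, eta=1.0):
    """Non-Hebbian trace: the weight update depends only on presynaptic
    activity, summed along the trajectory (no postsynaptic term)."""
    return eta * codebook[state_sequence].sum(axis=0)

# Store one episode, then decode which states it visited.
episode = rng.choice(n_states, size=path_len, replace=False)
trace = path_vector(episode)

# A state's overlap with the trace is high iff it lies on the stored path.
overlaps = codebook @ trace
decoded = np.argsort(overlaps)[-path_len:]
print(set(episode) == set(decoded))  # True when the random codes are well separated
```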
From loss flatness to compressed representations in neural networks
We show that the emergence of compressed, low-dimensional neural representations is intrinsically linked to the flatness of the minima found by Stochastic Gradient Descent (SGD). Specifically, a flatter loss around a minimum implies a lower upper bound on compression metrics of the neural representations. Thus, the generalization capacity of deep neural networks is shaped not only by the loss landscape in parameter space but also by the structure of the representation manifold in feature space.
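As a rough illustration of how the two sides of this link can be measured, the sketch below computes a perturbation-based sharpness proxy for flatness and the participation ratio as a dimensionality/compression metric on a toy network. Both estimators are common choices but are assumptions here, not the paper’s exact definitions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-hidden-layer regression network (assumed setup, not the paper's).
X = rng.standard_normal((512, 20))
y = X @ rng.standard_normal(20)
W1, W2 = 0.1 * rng.standard_normal((20, 64)), 0.1 * rng.standard_normal(64)

def hidden(W1):  # the representation in feature space
    return np.tanh(X @ W1)

def loss(W1, W2):
    return np.mean((hidden(W1) @ W2 - y) ** 2)

def sharpness(W1, W2, sigma=1e-2, n=50):
    """Flatness proxy: mean loss increase under random weight perturbations."""
    base = loss(W1, W2)
    bumps = [loss(W1 + sigma * rng.standard_normal(W1.shape),
                  W2 + sigma * rng.standard_normal(W2.shape)) - base
             for _ in range(n)]
    return np.mean(bumps)

def participation_ratio(H):
    """Effective dimensionality of a representation: (sum lam)^2 / sum lam^2."""
    lam = np.linalg.eigvalsh(np.cov(H.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

print(f"sharpness ~ {sharpness(W1, W2):.4f}, "
      f"hidden-layer PR ~ {participation_ratio(hidden(W1)):.1f}")
```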
A scale-dependent measure of system dimensionality
We’ve devised a novel, theoretically grounded metric for evaluating the multiscale structure of manifolds, including neural manifolds and other datasets. This scale-dependent approach uncovers the effective number of degrees of freedom in complex systems, linking dimensionality to spatiotemporal scale: it identifies the effective dimensionality at each scale and correlates with established measures of dimensionality.
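Our metric is defined in the paper; for intuition, one classical way to obtain a scale-dependent dimension is the local slope of the correlation integral, sketched below on a noisy circle embedded in 10 dimensions. The circle, radii grid, and slope estimator are all assumptions of this toy example, not the paper’s measure.

```python
import numpy as np
from scipy.spatial.distance import pdist

def scale_dependent_dimension(X, n_scales=20):
    """Dimension as the local slope of log N(r) vs log r, where N(r) is the
    mean fraction of neighbours within radius r (correlation-dimension style)."""
    d = pdist(X)
    radii = np.geomspace(np.percentile(d, 1), np.percentile(d, 99), n_scales)
    counts = np.array([(d < r).mean() for r in radii])
    return radii, np.gradient(np.log(counts), np.log(radii))

# A noisy circle in 10-D: ~1-D at large scales, noise-dominated at small ones.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 1000)
X = np.zeros((1000, 10))
X[:, 0], X[:, 1] = np.cos(theta), np.sin(theta)
X += 0.05 * rng.standard_normal(X.shape)

radii, dims = scale_dependent_dimension(X)
for r, dim in zip(radii[::5], dims[::5]):
    print(f"scale {r:.3f}: dimension ~ {dim:.2f}")
```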
Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion
Our research demonstrates how Recurrent Neural Networks (RNNs) dynamically learn representations that align with task demands, confining activity to manifolds of compressed dimensionality. Through simulations and mathematical analysis, we’ve found that gradient descent can guide RNNs to adjust the dimensionality of their representations to match task requirements during training, while also supporting generalization to unseen examples.
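A hypothetical sketch of this kind of measurement, assuming PyTorch, a toy sine-prediction task, and the participation ratio as the dimensionality measure (all choices ours, not the paper’s setup): track the effective dimensionality of the hidden states as training proceeds.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def participation_ratio(H):
    lam = torch.linalg.eigvalsh(torch.cov(H.T))
    return (lam.sum() ** 2 / (lam ** 2).sum()).item()

# Toy task with low-dimensional structure: predict a shifted sine from noisy inputs.
t = torch.linspace(0, 12.6, 100)
inputs = torch.sin(t)[None, :, None] + 0.1 * torch.randn(64, 100, 1)
targets = torch.sin(t + 0.5).expand(64, 100)[..., None]

rnn, readout = nn.RNN(1, 64, batch_first=True), nn.Linear(64, 1)
opt = torch.optim.Adam([*rnn.parameters(), *readout.parameters()], lr=1e-3)

for epoch in range(300):
    h, _ = rnn(inputs)
    loss = nn.functional.mse_loss(readout(h), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if epoch % 100 == 0:
        with torch.no_grad():
            pr = participation_ratio(h.reshape(-1, 64))
        print(f"epoch {epoch}: loss {loss.item():.4f}, hidden PR {pr:.1f}")
```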
Nature Machine Intelligence (2022)
Metastable attractors explain the variable timing of stable behavioral action sequences
We demonstrate that the actions of animals can be predicted by decoding transitions between specific neural states in the secondary motor cortex of rats. We developed a model that emphasizes the role of neural states in the progression of neural activity and its control within neural circuits. The model uses Hidden Markov Models to identify the states and attractor networks to quantify their dynamics, and shows that these states predict an animal’s upcoming behavior.
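For readers who want to experiment, here is a minimal stand-in pipeline assuming the hmmlearn package: it fits a Gaussian HMM to surrogate firing rates and extracts the decoded state transitions. The paper’s models and data are considerably richer; the Gaussian emissions and the surrogate data are assumptions of this sketch.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumes the hmmlearn package

rng = np.random.default_rng(3)

# Surrogate "recordings": 30 neurons whose mean rates switch between 4 hidden
# states (the paper fits HMMs to real spike trains; this is a stand-in).
n_states, n_neurons, seg = 4, 30, 200
means = rng.uniform(0, 5, (n_states, n_neurons))
true_states = np.repeat(rng.permutation(n_states), seg)
rates = means[true_states] + 0.5 * rng.standard_normal((len(true_states), n_neurons))

# Fit an HMM and decode the latent state sequence; transitions between
# decoded states are the events used to predict upcoming actions.
hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
hmm.fit(rates)
decoded = hmm.predict(rates)
transitions = np.flatnonzero(np.diff(decoded)) + 1
print("decoded state transitions at samples:", transitions)
```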
Predictive learning as a network mechanism for extracting low-dimensional latent space representations
This work shows how predictive learning enables the extraction of representational maps of an explored environment. We trained a Recurrent Neural Network (RNN) to predict sequences of observations, yielding low-dimensional, nonlinearly transformed representations of sensory inputs. With both simulations and mathematical arguments, we show how these representations map the latent structure of the environment.
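A toy version of this setup, under assumptions that are ours rather than the paper’s (a latent ring, random high-dimensional projections as observations, PyTorch): train an RNN on next-observation prediction, then check how much hidden-state variance is captured by the top two principal components.

```python
import torch
import torch.nn as nn

torch.manual_seed(1)

# Latent world: a random walk on a ring of 20 positions; observations are
# high-dimensional random projections of position (assumed toy setup).
n_pos, obs_dim, T = 20, 50, 200
proj = torch.randn(n_pos, obs_dim)
steps = torch.randint(0, 2, (64, T)) * 2 - 1
pos = torch.cumsum(steps, dim=1) % n_pos
obs = proj[pos]  # (batch, time, obs_dim)

rnn, readout = nn.RNN(obs_dim, 32, batch_first=True), nn.Linear(32, obs_dim)
opt = torch.optim.Adam([*rnn.parameters(), *readout.parameters()], lr=1e-3)

# Predictive objective: from observations up to t, predict the one at t+1.
for epoch in range(200):
    h, _ = rnn(obs[:, :-1])
    loss = nn.functional.mse_loss(readout(h), obs[:, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

# The top PCs of the hidden states should trace out the latent ring.
with torch.no_grad():
    H = rnn(obs[:, :-1])[0].reshape(-1, 32)
    H = H - H.mean(0)
    _, S, _ = torch.linalg.svd(H, full_matrices=False)
print("variance in top 2 PCs:", ((S[:2] ** 2).sum() / (S ** 2).sum()).item())
```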
Strong and localized recurrence controls dimensionality of neural activity across brain areas
Our research demonstrates how the complexity of neural activity can be dynamically and internally regulated in neural networks. Through an in-depth analysis of high-density Neuropixels recordings from many mice, we establish that areas across the mouse cortex operate in a sensitive regime in which recurrent synaptic networks play a significant role in controlling dimensionality. This control unfolds over time, with cortical activity transitioning among states of varying dimensionality.
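One simple way to see dimensionality varying over time, sketched with surrogate data and the participation ratio as the measure (the paper’s analyses are considerably more involved): slide a window across a spike-count matrix and compute the effective dimensionality in each window.

```python
import numpy as np

def participation_ratio(counts):
    lam = np.linalg.eigvalsh(np.cov(counts.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

def dimensionality_over_time(counts, window=500, step=100):
    """Slide a window over a (time, neurons) spike-count matrix and return
    the participation ratio in each window."""
    return [participation_ratio(counts[t:t + window])
            for t in range(0, len(counts) - window, step)]

# Surrogate data alternating between low- and high-dimensional epochs.
rng = np.random.default_rng(4)
low = rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 100))
high = rng.standard_normal((1000, 100))
counts = np.vstack([low, high, low])
print(np.round(dimensionality_over_time(counts), 1))
```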
Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity
We show how local connectivity motifs in Recurrent Neural Networks (RNNs) shape the dimensionality of the manifold of network activity. Dimensionality, a measure of coordinated network-wide activity, is crucial in neuroscience because it reveals the number of modes, or degrees of freedom, that a network can independently explore. We find that the dimensionality of global activity patterns can be systematically regulated by local connectivity structure.
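As an illustration of connectivity shaping dimensionality, the sketch below computes the stationary covariance of a noise-driven linear rate network from the Lyapunov equation and shows how the participation ratio changes as the weight of symmetric (reciprocal) motifs is dialed up. The linear model and the symmetry knob are assumptions of this sketch, not the paper’s full motif analysis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def stationary_pr(W, noise=1.0):
    """PR of the stationary covariance of a noise-driven linear rate network
    dx/dt = -x + W x + noise, obtained by solving the Lyapunov equation."""
    A = W - np.eye(len(W))
    sigma = solve_continuous_lyapunov(A, -noise * np.eye(len(W)))
    lam = np.linalg.eigvalsh(sigma)
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(5)
n, g = 200, 0.4
J = g * rng.standard_normal((n, n)) / np.sqrt(n)

# Interpolate between asymmetric and symmetric connectivity: more reciprocal
# (symmetric) structure concentrates variance in fewer modes, lowering the PR.
for tau in [0.0, 0.5, 1.0]:
    W = (1 - tau) * J + tau * (J + J.T) / np.sqrt(2)
    print(f"motif symmetry {tau:.1f}: PR ~ {stationary_pr(W):.1f}")
```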
Dimensionality compression and expansion in Deep Neural Networks
We reveal that deep neural networks process information in two distinct phases: an initial phase of dimensionality expansion followed by a phase of dimensionality compression. This process is crucial for high-dimensional problems, such as those involving images, text, or video. Applying advanced techniques for intrinsic dimensionality estimation, we demonstrate that neural networks effectively learn to identify and extract task-relevant variables lying on low-dimensional manifolds.
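One intrinsic-dimension estimator in this spirit is TwoNN (Facco et al., 2017), which uses only the ratio of each point’s second- to first-nearest-neighbour distances. A minimal sketch, validated here on data of known intrinsic dimension; applying it layer by layer to a trained network’s activations is the analysis the paper’s approach corresponds to, and is left as an exercise.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_dimension(X):
    """TwoNN intrinsic-dimension estimate: maximum likelihood on the ratio
    of second- to first-nearest-neighbour distances."""
    dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    mu = dists[:, 2] / dists[:, 1]  # column 0 is the point itself
    return len(mu) / np.log(mu).sum()

# Sanity check on data of known intrinsic dimension: a 2-D plane
# linearly embedded in 50 ambient dimensions.
rng = np.random.default_rng(6)
latent = rng.standard_normal((2000, 2))
X = latent @ rng.standard_normal((2, 50))
print(f"estimated intrinsic dimension ~ {twonn_dimension(X):.2f}")  # close to 2
```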