An organizing principle for spatial transformation across diverse rooms in the hippocampal formation
The remapping of spatial firing fields of cells in the hippocampal formation between rooms is a well-known and extensively studied phenomenon; however, the organizing principle behind this remapping is unknown. While CA1 shows random remapping, regions such as the subiculum and the medial entorhinal cortex (MEC) display a more organized transformation between rooms, yet the structure of that transformation has remained uncharacterized. To study this, we used high-density chronic Neuropixels recordings of the subiculum and MEC in freely moving mice and investigated the structure of the transformation at the population level. We estimated the existence and characteristics of the remapping transformation with a decoder deliberately designed to be non-linear and time-dependent, so that it could capture rich and complex transformations. To our surprise, we discovered that the ensemble activity often underwent a simple, smooth, low-dimensional transformation captured by an affine map (e.g., rotation, scaling, and shear). The population code is thus flexibly adapted to a new context while retaining a stable spatial representation at the network level. This principle was reproduced across the subiculum and the MEC, as well as across spatial cell types such as border, head-direction, and other spatially modulated cells. These results provide new insights into the computational principles of hippocampal-formation remapping, suggesting that spatial cognition is subserved by adaptive ensemble codes governed by a simple affine coordinate transformation. Our findings establish population structure as a critical organizing principle for spatial memory and suggest avenues for decoding spatial information in unvisited environments by embedding transformation rules into population-level decoders. We thus demonstrate, for the first time, a simple organizing principle for the transformation of spatial representations between diverse environments in the hippocampal formation.
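The affine account can be made concrete with a few lines of code: given matched firing-field locations in two rooms, one can fit the best least-squares 2-D affine map and inspect the residuals. The following is an illustrative sketch on synthetic coordinates, not the decoder used in the study:

```python
import numpy as np

def fit_affine(p, q):
    """Least-squares affine map q ~ p @ A.T + b for matched 2-D points."""
    P = np.hstack([p, np.ones((p.shape[0], 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(P, q, rcond=None)      # (3, 2) solution matrix
    return M[:2].T, M[2]                           # A (2x2), b (2,)

# synthetic field centers: room B = room A rotated 30 degrees, scaled by 0.8
rng = np.random.default_rng(0)
p = rng.uniform(0.0, 1.0, size=(20, 2))
th = np.deg2rad(30.0)
A_true = 0.8 * np.array([[np.cos(th), -np.sin(th)],
                         [np.sin(th),  np.cos(th)]])
b_true = np.array([0.1, -0.2])
q = p @ A_true.T + b_true

A_hat, b_hat = fit_affine(p, q)
residual = np.abs(q - (p @ A_hat.T + b_hat)).max()
```

A near-zero residual over many fields is the signature of the low-dimensional transformation described above; random remapping, by contrast, leaves large residuals under any single affine fit.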
From Geometric to Directional Time-Varying Graphs
Learning depends on the brain’s ability to generate organized patterns of activity that support adaptive behavior. To understand dynamic phenomena such as learning, cognition, and disease progression, we need to study how neuronal connectivity evolves over time. Connectivity is commonly described at three complementary levels: structural, functional, and effective. Structural connectivity captures the anatomical wiring between neurons. Functional connectivity is typically inferred from correlations in activity and is often informative, but it is inherently undirected and does not establish causality. Effective connectivity, by contrast, describes directed influences: how activity in one neural population shapes activity in another.
A useful way to formalize these relationships is to represent neural circuits as graphs, with nodes denoting neural units and edges denoting connections. Correlation-based functional graphs are symmetric, whereas effective graphs are directed and explicitly encode influence and directionality.
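The distinction can be illustrated with a toy two-node system (hypothetical signals, not a specific dataset): when one node drives another with a lag, the symmetric correlation can miss the interaction entirely, while simple lagged regression recovers both its strength and its direction:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
y = np.zeros(n)
y[1:] = 0.8 * x[:-1] + 0.1 * rng.normal(size=n - 1)   # x drives y with a lag

# functional (undirected): a single symmetric correlation value, here near zero
r = np.corrcoef(x, y)[0, 1]

# effective (directed): lagged regressions give asymmetric edge weights
w_xy = np.dot(x[:-1], y[1:]) / np.dot(x[:-1], x[:-1])  # x -> y influence
w_yx = np.dot(y[:-1], x[1:]) / np.dot(y[:-1], y[:-1])  # y -> x influence
```

In graph terms, the correlation yields one undirected edge weight, while the lagged estimates populate an asymmetric adjacency matrix with a strong x→y edge and a negligible y→x edge.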
Directed connectivity models are widely used in fMRI, but the temporal resolution of BOLD signals (on the order of seconds) limits their ability to capture fast learning-related dynamics. Invasive recordings such as calcium imaging and electrophysiology provide population-scale measurements at the timescales of neural computation, making them well suited for studying evolving connectivity. Yet despite growing interest in network dynamics, many analyses in these modalities still rely on undirected measures.
We develop computational methods to infer and analyze large-scale neuronal networks by modeling effective connectivity as a time-varying directed graph. We apply these approaches to ask how circuit interactions reorganize as new representations form. More broadly, our goal is to reveal principles that link synaptic plasticity, circuit dynamics, and behavior during learning and navigation.
Model Fusion Without Training Data: Graph-Aligned Weight Merging
In neuroscience, machine learning models are often trained in a task- and subject-specific manner, driven by limited data and substantial variability across individuals. A familiar example is brain–computer interfaces (BCIs), where models that decode motor intentions from EEG are typically trained separately for each user. While these personalized networks can perform well within their original setting, they often fail to generalize across users, sessions, or tasks. Joint training across subjects or tasks is a natural alternative and can improve generalization, but it is frequently impractical: multi-task models are harder to optimize, raw data access is often restricted by privacy and ethical constraints, and the continued growth of large datasets makes centralized training increasingly expensive.
To address these challenges, we are developing a framework for merging independently trained neural networks into a single multi-task model without access to the original training data. The approach assumes a shared architecture across models but allows each network to be optimized for a different task (or subject). We represent each trained model as a compact weight graph defined by its linear layers, then align and merge these graphs via an optimization procedure that preserves structural consistency and information flow across networks. The merged graph is subsequently translated back into a new set of network weights.
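As a minimal illustration of the align-then-merge idea, consider a single linear layer and permutation-based alignment via the Hungarian algorithm (a one-layer sketch under simplifying assumptions, not our full graph-alignment procedure):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_and_merge(W1, W2):
    """Permute the hidden units of W2 to best match W1, then average.
    W1, W2: (hidden, input) weight matrices of corresponding linear layers."""
    S = W1 @ W2.T                          # unit-to-unit similarity
    _, cols = linear_sum_assignment(-S)    # assignment maximizing similarity
    return 0.5 * (W1 + W2[cols]), cols

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))
perm = np.array([2, 0, 3, 1])
W2 = W1[perm]                              # model 2: same units, reordered
W_merged, cols = align_and_merge(W1, W2)   # recovers the unit correspondence
```

Because hidden units can be permuted without changing a network's function, naive weight averaging fails; aligning units first, as sketched here, is what makes the merged weights meaningful.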
This strategy enables the fusion of specialized models into a unified network that supports multi-task functionality while preserving task-specific performance. By combining representational pathways learned independently—without sharing raw data—our approach offers a scalable way to leverage existing models and move beyond the limitations of centralized multi-task training.
Interpretable Multiscale Reconstruction of Correlation-Matrix Dynamics
This project builds on our prior work on the dynamics of symmetric positive definite (SPD) matrices in time-varying functional connectivity, where we introduced RONI, a geometric multiresolution decomposition analogous to a Haar-wavelet analysis filter bank. By decomposing connectivity time series into components across multiple temporal scales, this framework enables principled identification of the dominant drivers of network dynamics while explicitly respecting the intrinsic geometry of correlation matrices.
Here, we develop the complementary synthesis framework: an optimal reconstruction methodology that defines the inverse of the geometric filtering operations, thereby making the transform fully reversible. This extension moves RONI beyond one-way analysis and enables controlled signal synthesis by selectively recombining scale-specific components, supporting denoising, trajectory isolation, and hypothesis-driven manipulation of network dynamics. We further introduce new interpretability methods that connect reconstructed components to meaningful dynamical mechanisms, providing transparent, geometry-preserving explanations of how multiscale features contribute to observed connectivity changes.
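The analysis/synthesis pairing can be sketched for a single Haar step. Here we assume a log-Euclidean geometry for illustration (RONI's construction is more general): an SPD "average" and a symmetric "detail" are computed in the tangent space, and the inverse step recovers the original pair exactly:

```python
import numpy as np

def spd_log(M):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

def haar_analysis(A, B):
    """One Haar step: a geometric 'average' (an SPD matrix) and a
    'detail' (a symmetric tangent vector) encoding their difference."""
    La, Lb = spd_log(A), spd_log(B)
    return spd_exp((La + Lb) / 2), (La - Lb) / 2

def haar_synthesis(avg, detail):
    """Inverse step: exact reconstruction of the original SPD pair."""
    Lm = spd_log(avg)
    return spd_exp(Lm + detail), spd_exp(Lm - detail)

rng = np.random.default_rng(2)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
A = X.T @ X + np.eye(3)                  # two SPD (covariance-like) matrices
B = Y.T @ Y + np.eye(3)

avg, det = haar_analysis(A, B)
A2, B2 = haar_synthesis(avg, det)        # perfect reconstruction
```

Zeroing the detail before synthesis yields a denoised (averaged) pair, which is the simplest instance of the scale-selective recombination described above.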
Understanding how parents and adolescents influence one another is central to research on family dynamics, mental health, and development. To investigate these processes, researchers conduct longitudinal studies in which parent-child dyads complete repeated questionnaires. Traditional analyses of such data typically rely on linear models and group averages. While effective at demonstrating strong effects, these methods fall short of capturing the richness and complexity of dyadic interactions and dynamics.
In this study, we propose a dynamic, data-driven framework for analyzing parent-child dyads, inspired by the two-body problem in physics. We conceptualize parents and adolescents as two interacting planetary bodies, each following its own trajectory while simultaneously exerting influence on the other. We will use data-science tools to extract the latent dynamics of each participant (parent or child) and of the dyad as a whole. Then, adapting the two-body formulation, we will treat the dyad trajectory as the trajectory of the center of mass. We will further use the estimated dyad trajectory to evaluate the "relative mass" of each individual, identifying who sets the tone in the relationship in terms of emotional state and mental health. Finally, we will characterize the dyads and tone-setters at the population level to extract indicators of positive dynamics, such as resilience and low depressive symptoms, as well as indicators of negative dynamics, including persistent conflict and high depressive symptoms.
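A toy sketch of the "relative mass" idea follows (synthetic trajectories; the actual estimation will operate on latent dynamics extracted from the questionnaire data). Treating the dyad trajectory as a center of mass, the relative mass reduces to a single least-squares weight:

```python
import numpy as np

def relative_mass(parent, child, dyad):
    """Least-squares weight w in dyad ~ w * parent + (1 - w) * child.
    w > 0.5 suggests the parent 'sets the tone' in the dyad."""
    d = (parent - child).ravel()
    return np.dot(d, (dyad - child).ravel()) / np.dot(d, d)

t = np.linspace(0.0, 1.0, 50)
parent = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
child = 0.5 * parent + 0.1               # the child partially tracks the parent
dyad = 0.7 * parent + 0.3 * child        # the parent dominates the dyad
w = relative_mass(parent, child, dyad)   # recovers w = 0.7
```

In the center-of-mass analogy, w plays the role of m_parent / (m_parent + m_child); the single-parameter form here is a deliberate simplification of the full dynamic model.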
This research integrates psychological science and mathematical modeling to capture the complexity of parent-child interactions. Beyond advancing theory, the framework has practical implications for designing interventions that target not only individual mental health but also the relational patterns that shape it. By treating family systems as dynamic and interactive rather than static snapshots, our approach offers a novel lens on the mechanisms of influence within parent-child dyads.
Unsupervised framework for discovering functional neuronal ensembles from population dynamics
High-dimensional neural recordings provide unprecedented access to brain-wide population dynamics, yet interpreting these signals remains a major challenge. Most existing analyses rely on external information, such as known stimuli or behavioral labels, to better understand the network's dynamics. Moreover, these analyses are often applied in a univariate manner, treating neurons as independent. Biologically, meaningful neural representations typically arise from ensembles of interacting neurons rather than from individual cells, making such supervised, univariate approaches insufficient for capturing collective dynamics in an unbiased way.
Here, we propose GroupFS, a novel, data-driven method that 1) groups together co-active cells and 2) ranks the groups by their importance to the overall dynamics, without requiring supervision or external labels. GroupFS preserves the intrinsic geometry of the data by constructing two graphs, one over samples and one over features, that capture temporal and neuronal relationships. Enforcing smoothness across both graphs encourages neurons with similar activity patterns to form coherent subpopulations while suppressing noise and redundancy. The result is a compact, interpretable representation of population activity that reveals the organization of neural dynamics.
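The dual-graph smoothness criterion can be sketched with graph Laplacians over samples and features (a simplified score, not the full GroupFS objective): activity that is smooth in time and consistent across linked neurons scores low, while incoherent activity scores high:

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian of a weighted adjacency matrix."""
    return np.diag(W.sum(axis=1)) - W

def dual_smoothness(X, Ws, Wf):
    """Joint smoothness of data X (samples x features) over a sample graph
    Ws and a feature graph Wf; lower values mean more coherent structure."""
    return np.trace(X.T @ laplacian(Ws) @ X) + np.trace(X @ laplacian(Wf) @ X.T)

n = 6
Ws = np.zeros((n, n))
for i in range(n - 1):
    Ws[i, i + 1] = Ws[i + 1, i] = 1.0          # chain graph over time samples
Wf = np.array([[0.0, 1.0], [1.0, 0.0]])        # the two "neurons" are linked

ramp = np.linspace(0.0, 1.0, n)
coherent = np.stack([ramp, ramp], axis=1)                      # co-active, smooth
incoherent = np.stack([ramp, (-1.0) ** np.arange(n)], axis=1)  # unrelated, rough

s_lo = dual_smoothness(coherent, Ws, Wf)
s_hi = dual_smoothness(incoherent, Ws, Wf)
```

Each trace term expands to a sum of squared differences across graph edges, so minimizing it over candidate groupings favors exactly the coherent subpopulations described above.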
We applied GroupFS to whole-brain light-sheet recordings of larval zebrafish exposed to visual stimuli. The model uncovered neuronal ensembles in the anterior hindbrain tuned to distinct stimulus conditions, in regions previously identified as sensorimotor convergence areas by supervised analyses. Here, however, these patterns emerged directly from the data, reflecting coordinated neuronal interactions. By revealing such structure in an entirely unsupervised manner, GroupFS enables researchers to uncover how network activity is organized internally and to relate these ensembles to behavior or sensory context, providing a powerful and interpretable tool for large-scale neural recordings.
A Multimodal, Data-Driven Analysis of Brain–Behavior Relationships Across Pregnancy in First-Time Mothers
Every year, around 140 million women experience pregnancy, a major life transition marked by extensive psychological and biological changes. While these processes are well documented, most existing research examines them in isolation, using either behavioral measures or neuroimaging data alone. As a result, the dynamic relationship between psychological trajectories and neural reorganization across pregnancy remains poorly understood.
Our work seeks to bridge this gap by leveraging a longitudinal, multimodal dataset that combines behavioral questionnaires with structural and resting-state functional MRI, collected across multiple stages of pregnancy. A central question guiding this work is whether women’s psychological health prior to pregnancy is associated with how they adapt to the transition into motherhood, and whether such differences are reflected in brain organization over time.
To examine this, we apply dimensionality reduction techniques to uncover meaningful patterns within the high-dimensional data. We fuse multiple modalities, including functional connectivity modeled as symmetric positive definite (SPD) matrices, alongside Euclidean behavioral and structural features, to reveal both shared and modality-specific latent structures. This framework supports later predictive analyses of pregnancy outcomes (e.g., maternal attachment, birth experience), ultimately advancing our understanding of the psychological-neural relationship during this critical life transition.
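One fusion route can be sketched as follows, assuming a log-Euclidean flattening of the connectivity matrices before a joint linear embedding (toy data; the actual pipeline uses richer geometry-aware fusion and dimensionality reduction):

```python
import numpy as np

def spd_to_vec(C):
    """Log-Euclidean flattening: matrix logarithm, then upper triangle."""
    w, V = np.linalg.eigh(C)
    L = V @ np.diag(np.log(w)) @ V.T
    return L[np.triu_indices_from(L)]

rng = np.random.default_rng(3)
# toy cohort: 8 subjects, 4-node functional connectivity + 5 behavioral scores
conn = []
for _ in range(8):
    ts = rng.normal(size=(20, 4))                    # fake regional time series
    conn.append(np.corrcoef(ts.T) + 1e-3 * np.eye(4))
behav = rng.normal(size=(8, 5))

F = np.stack([spd_to_vec(C) for C in conn])          # (8, 10) SPD features
Z = np.hstack([F, behav])                            # fuse the modalities
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)             # common scale per column
U, S, _ = np.linalg.svd(Z, full_matrices=False)
embedding = U[:, :2] * S[:2]                         # shared 2-D latent space
```

The matrix logarithm maps SPD connectivity into a vector space where Euclidean operations are geometrically sensible, which is what allows connectivity and behavioral features to be combined in a single embedding.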