I focus on modeling latent interactions in structured and relational data that can be represented in an item-response form, such as topic-word matrices, review data, or outputs from deep learning models. Using Bayesian latent space models, I explore how items (e.g., words, questions) and respondents (e.g., topics, individuals) relate to each other through underlying low-dimensional structures. This approach enables interpretable inference on hidden relationships without relying on heuristic similarity metrics and has been applied to problems in text analysis, longitudinal surveys, and dynamic relational systems.
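As a point of reference, a canonical latent space item response model (the papers below use richer variants, so this is illustrative rather than their exact specification) links item i and respondent j through distance in a low-dimensional space:

\[
\operatorname{logit} P(y_{ij} = 1) = \theta_j + \beta_i - \lVert z_j - w_i \rVert,
\]

where \theta_j and \beta_i are respondent and item intercepts and z_j, w_i are latent positions; a small distance \lVert z_j - w_i \rVert signals an interaction beyond what the main effects explain.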
Journal of Computational and Graphical Statistics, 31(2), 360-377.
We develop a Bayesian method to study how items interact over time in longitudinal survey data. The model captures complex, time-varying item relationships while sidestepping intractable normalizing constants, using shrinkage priors together with an efficient MCMC algorithm. The result is a flexible and scalable way to identify meaningful item interactions.
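One plausible dynamic version of the base model above (an illustrative sketch, not necessarily the paper's exact specification) lets latent positions evolve across survey waves t:

\[
\operatorname{logit} P(y_{ijt} = 1) = \theta_{jt} + \beta_{it} - \lVert z_{jt} - w_{it} \rVert, \qquad w_{it} \mid w_{i,t-1} \sim \mathcal{N}(w_{i,t-1}, \tau_i^2 I),
\]

where a shrinkage prior on the innovation scales \tau_i pulls items whose interactions are stable toward fixed positions, so only genuinely changing relationships move through the latent space.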
Journal of Applied Statistics, 1-5.
We explore how topics in real-world text data relate to each other by treating topic-word distributions as item-response data. Using a Bayesian latent space model, we map topics and words into a shared space to reveal their underlying connections. This yields a clearer picture of how topics are structured, illustrated here on early COVID-19 literature from PubMed.
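For intuition, here is a minimal sketch of the embedding idea in Python. It is not the paper's implementation: it binarizes a toy topic-word matrix and fits a 2-D latent space by plain gradient ascent on the log-likelihood, standing in for the full Bayesian MCMC treatment, and all names and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_topics, n_words, dim = 8, 40, 2

# Toy binary "item response" data: Y[t, w] = 1 if word w is prominent in topic t.
Y = (rng.random((n_topics, n_words)) < 0.3).astype(float)

U = rng.normal(scale=0.1, size=(n_topics, dim))  # topic positions
V = rng.normal(scale=0.1, size=(n_words, dim))   # word positions
beta = np.zeros(n_words)                         # word-level intercepts

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for _ in range(2000):
    diff = U[:, None, :] - V[None, :, :]         # (topic, word, dim) differences
    dist = np.linalg.norm(diff, axis=-1) + 1e-9  # small offset avoids divide-by-zero
    P = sigmoid(beta[None, :] - dist)            # P(Y[t, w] = 1) under the model
    R = Y - P                                    # d log-likelihood / d logit
    G = (R / dist)[:, :, None] * diff            # chain rule through the distance
    U -= lr * G.sum(axis=1)                      # topics move toward their words
    V += lr * G.sum(axis=0)                      # and words toward their topics
    beta += lr * R.sum(axis=0)

# After fitting, small ||U[t] - V[w]|| marks a strong topic-word link, and
# nearby topic positions share word neighborhoods: the topic-structure map.
```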
Frontiers in Neuroscience, 1-5.
We propose an interpretable framework to compare brain connectivity patterns across groups using resting-state fMRI data. By combining functional connectivity networks, self-attention deep learning, and a latent space model, we highlight key regions of interest (ROIs) that differentiate types of cognitive impairment. This approach uncovers meaningful, group-specific brain activity patterns from complex, high-dimensional data.
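As a rough illustration of the attention component only (the single-head setup, shapes, and names are assumptions; the paper's architecture is more involved), one self-attention pass over ROI connectivity profiles produces per-ROI attention weights that can be read as importance scores:

```python
import numpy as np

rng = np.random.default_rng(1)
n_roi = 10
C = rng.normal(size=(n_roi, n_roi))
C = (C + C.T) / 2                        # toy symmetric functional-connectivity matrix

d_k = 8                                  # attention head dimension (illustrative)
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(n_roi, d_k)) for _ in range(3))

# Each ROI's connectivity profile (a row of C) acts as one token.
Q, K, V = C @ Wq, C @ Wk, C @ Wv
scores = Q @ K.T / np.sqrt(d_k)          # scaled dot-product attention logits
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)        # row-wise softmax: (n_roi, n_roi) weights

out = A @ V                              # attention-contextualized ROI features
roi_importance = A.mean(axis=0)          # average attention each ROI receives
top = np.argsort(roi_importance)[::-1][:3]
print("Most-attended ROIs:", top)
```

In a trained model these weights are learned from data, so contrasting them across groups is one natural way to surface the ROIs that drive group differences.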