Research

This line of work introduces Bayesian latent manifold tuning models based on the Gaussian process latent variable model (GPLVM). The model generates the latent representation via a Gaussian process or Normal prior, and then maps the latents to high-dimensional neural activity through nonlinear tuning curves, which are themselves modeled with another Gaussian process prior. The proposed model is well suited to extracting low-dimensional nonlinear manifolds underlying diverse kinds of neural data. We also learn smooth tuning curves over the latent embedding space that accurately characterize each neuron’s response as a function of a particular experimental variable. The estimated latent representations provide novel scientific insights into neural dynamics and help neuroscientists understand complex neural computations.
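
As a rough illustration of this generative structure (not the inference procedures developed in the papers below), the sketch assumes a 1-D latent, squared-exponential kernels, and a Poisson observation model with an exponential link; all sizes and kernel settings are illustrative.

```python
# Minimal sketch of the generative model behind the latent manifold tuning work:
# a GP-prior latent trajectory is pushed through GP-drawn nonlinear tuning curves
# to produce Poisson spike counts. Kernels, sizes, and rates are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(a, b, length, var=1.0):
    """Squared-exponential covariance between 1-D arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

T, N = 200, 30                            # time bins, neurons
t = np.linspace(0, 1, T)

# 1) Latent trajectory x(t) ~ GP(0, k_t): slow, smooth dynamics.
K_t = rbf_kernel(t, t, length=0.1) + 1e-5 * np.eye(T)
x = rng.multivariate_normal(np.zeros(T), K_t)

# 2) Nonlinear tuning curves f_n(x) ~ GP(0, k_x) over the latent space,
#    one independent draw per neuron, evaluated at the sampled latents.
K_x = rbf_kernel(x, x, length=0.5) + 1e-5 * np.eye(T)
L = np.linalg.cholesky(K_x)
F = L @ rng.standard_normal((T, N))       # T x N log-rates

# 3) Poisson observations through an exponential link.
rates = np.exp(F + 1.0)                   # offset sets the mean firing rate
spikes = rng.poisson(rates)               # T x N spike-count matrix
print(spikes.shape, spikes.mean())
```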

Neural Dynamics Discovery via Gaussian Process Recurrent Neural Networks. Roger She, Anqi Wu. UAI 2019 (oral presentation: 6.8%) [paper][code][talk]

Learning a latent manifold of odor representations from neural responses in piriform cortex. Anqi Wu, Stan Pashkovski, Bob Datta, and Jonathan W Pillow. NeurIPS 2018 (acceptance rate: 20.67%) [paper]

Gaussian process based nonlinear latent structure discovery in multivariate spike train data. Anqi Wu, Nicholas Roy, Stephen Keeley, and Jonathan W Pillow. NeurIPS 2017 (acceptance rate: 20.93%) [paper]

This work introduces a novel pose estimation algorithm for animal behavior analysis. We build a probabilistic graphical model on top of deep neural networks to leverage skeletal constraints and temporal continuity in animal videos, and develop an efficient structured variational approach to perform inference. The resulting model exploits both labeled and unlabeled frames to achieve significantly more accurate and robust tracking across different species of animals in various tasks. We also apply the estimated traces to discover behavioral syllables via time-series segmentation and to estimate interpretable “disentangled” low-dimensional representations of the full behavioral video.
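
The sketch below only illustrates the kind of structured terms the graphical model layers on top of network keypoint predictions, namely a temporal-continuity penalty and a soft skeletal (limb-length) penalty. The skeleton edges, reference lengths, and weights are hypothetical, and the actual model performs structured variational inference rather than penalized optimization of this exact objective.

```python
# Sketch of structured terms combining skeletal and temporal constraints on
# keypoint trajectories. Weights, edges, and reference lengths are placeholders.
import numpy as np

def structured_penalty(traj, edges, ref_len, w_time=1.0, w_skel=1.0):
    """traj: (T, K, 2) keypoint coordinates over T frames for K body parts.
    edges: list of (i, j) keypoint pairs connected in the skeleton.
    ref_len: reference length for each edge (same order as `edges`)."""
    # Temporal continuity: penalize large frame-to-frame jumps of each keypoint.
    temporal = np.sum((traj[1:] - traj[:-1]) ** 2)

    # Skeletal constraint: penalize limb lengths that drift from a reference.
    skeletal = 0.0
    for (i, j), L0 in zip(edges, ref_len):
        limb_len = np.linalg.norm(traj[:, i] - traj[:, j], axis=-1)
        skeletal += np.sum((limb_len - L0) ** 2)

    return w_time * temporal + w_skel * skeletal

# Toy usage: 100 frames, 4 keypoints, a simple chain skeleton.
rng = np.random.default_rng(1)
traj = np.cumsum(rng.standard_normal((100, 4, 2)), axis=0) * 0.1
edges = [(0, 1), (1, 2), (2, 3)]
print(structured_penalty(traj, edges, ref_len=[1.0, 1.0, 1.0]))
```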

Deep Graph Pose: a semi-supervised deep graphical model for improved animal pose tracking. Anqi Wu*, E. Kelly Buchanan*, Matthew Whiteway, Michael Schartner, Guido Meijer, Jean-Paul Noel, Erica Rodriguez, Claire Everett, Amy Norovich, Evan Schaffer, Neeli Mishra, C. Daniel Salzman, Dora Angelaki, Andrés Bendesky, The International Brain Laboratory, John Cunningham, and Liam Paninski. (* equal contribution). NeurIPS 2020 (acceptance rate: 20%) [paper][bioRxiv][code]

This work provides two innovations that aim to turn variational Bayes into a robust inference tool for Bayesian neural networks: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior over parameters and a novel empirical Bayes procedure for automatically selecting prior variances. Combining these two innovations, the resulting method is highly efficient and robust. We demonstrate strong predictive performance over alternative approaches on heteroscedastic regression.
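
As a simplified illustration of the deterministic moment idea, the sketch below propagates activation means and variances through a Bayesian linear layer and a ReLU under a diagonal-Gaussian assumption; the weight means and variances are placeholders, and the full method in the paper handles further details (e.g., covariance structure and the empirical Bayes prior) not shown here.

```python
# Simplified sketch of deterministic moment propagation: push activation means and
# variances (diagonal approximation) through a Bayesian linear layer and a ReLU,
# instead of drawing Monte Carlo weight samples. Shapes and values are illustrative.
import numpy as np
from scipy.stats import norm

def linear_moments(m_in, v_in, W_mean, W_var, b_mean, b_var):
    """Mean/variance of z = W a + b with independent Gaussian weights and inputs."""
    m_out = W_mean @ m_in + b_mean
    v_out = (W_mean**2) @ v_in + W_var @ (m_in**2) + W_var @ v_in + b_var
    return m_out, v_out

def relu_moments(m, v):
    """Closed-form mean/variance of max(a, 0) for a ~ N(m, v), elementwise."""
    s = np.sqrt(v)
    alpha = m / s
    mean = m * norm.cdf(alpha) + s * norm.pdf(alpha)
    second = (m**2 + v) * norm.cdf(alpha) + m * s * norm.pdf(alpha)
    return mean, np.maximum(second - mean**2, 1e-12)

# Toy layer: 3 inputs -> 2 units, with hypothetical weight means and variances.
m_a, v_a = np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.2, 0.05])
W_mean = np.array([[0.3, -0.2, 0.1], [0.0, 0.4, -0.5]])
W_var = np.full_like(W_mean, 0.01)
m_z, v_z = linear_moments(m_a, v_a, W_mean, W_var, np.zeros(2), np.full(2, 0.01))
print(relu_moments(m_z, v_z))
```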

Deterministic variational inference for robust Bayesian neural networks. Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E. Turner, José Miguel Hernández-Lobato, and Alexander L. Gaunt. ICLR 2019 (oral presentation: 1.5%) [paper][code-tensorflow][code-tensor2tensor][talk]

This work introduces a hierarchical Gaussian process-based model for smooth, region-sparse weight vectors and tensors in a linear regression setting. We show substantial improvements over comparable methods on brain imaging datasets and discover interpretable regional activations of the brain during different tasks.
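
A rough sketch of the underlying prior: a smooth latent field drawn from a Gaussian process sets local prior variances for the regression weights, so nonzero weights appear in smooth, contiguous regions. The kernel parameters, offsets, and grid below are illustrative, not those used in the papers.

```python
# Sketch of a dependent-relevance-determination-style prior: a smooth GP draw u(s)
# over coefficient locations sets local prior variances exp(u(s)), so sampled
# weights are sparse in smooth, contiguous regions. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(2)
D = 200                                       # number of regression coefficients
s = np.linspace(0, 1, D)                      # coefficient locations (e.g. voxels)

# Smooth latent field u ~ GP(mean, k) controlling local relevance.
d2 = (s[:, None] - s[None, :]) ** 2
K = 9.0 * np.exp(-0.5 * d2 / 0.05**2) + 1e-6 * np.eye(D)
u = rng.multivariate_normal(-6.0 * np.ones(D), K)   # negative mean: mostly "off"

# Conditional prior on weights: w_i ~ N(0, exp(u_i)).
w = rng.standard_normal(D) * np.exp(0.5 * u)
print("fraction of near-zero weights:", np.mean(np.abs(w) < 0.05))
```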

Dependent relevance determination for smooth and structured sparse regression. Anqi Wu, Oluwasanmi Koyejo, and Jonathan W. Pillow. Journal of Machine Learning Research (JMLR). [JMLR][arXiv][code]

Incorporating structured assumptions with probabilistic graphical models in fMRI data analysis. MingBo Cai, Michael Shvartsman, Anqi Wu, Hejia Zhang, and Xia Zhu (all authors contributed equally). Neuropsychologia. [arXiv][web link]

Sparse Bayesian structure learning with dependent relevance determination priors. Anqi Wu, Mijung Park, Oluwasanmi O. Koyejo, and Jonathan W. Pillow. NeurIPS 2014 (acceptance rate: 24.67%) [paper]

This work introduces the brain kernel, a continuous covariance function for whole-brain activity patterns. We estimate the brain kernel using resting-state fMRI data, and we develop an exact, scalable inference method based on block coordinate descent to overcome the challenges of high dimensionality (10-100K voxels). Finally, we illustrate the brain kernel's usefulness in brain decoding and factor analysis applications on four different task-based fMRI datasets.
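
The core object can be sketched as a kernel over learned low-dimensional voxel embeddings: each voxel v gets an embedding z(v) fit with a GPLVM to resting-state data, and the covariance between voxels is an RBF kernel over those embeddings. The embeddings below are random placeholders standing in for learned ones.

```python
# Sketch of the brain-kernel idea: each voxel v has a learned low-dimensional
# embedding z(v), and the covariance between voxels is a kernel over embeddings,
# k(v, v') = rho * exp(-||z(v) - z(v')||^2 / (2 * length^2)). The embeddings here
# are random placeholders rather than ones estimated from resting-state fMRI.
import numpy as np

def brain_kernel(Z, rho=1.0, length=1.0, jitter=1e-6):
    """Covariance matrix over voxels from latent embeddings Z of shape (V, d)."""
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return rho * np.exp(-0.5 * d2 / length**2) + jitter * np.eye(len(Z))

V, d = 500, 3                         # voxels, latent dimensionality (illustrative)
Z = np.random.default_rng(3).standard_normal((V, d))
K = brain_kernel(Z, rho=1.0, length=1.5)
print(K.shape, np.linalg.eigvalsh(K).min())   # a valid (positive-definite) covariance
```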

Brain Kernel: a covariance function for fMRI data using a large-scale Gaussian process latent variable model. Anqi Wu, Barbara Engelhardt, and Jonathan W. Pillow. BNP 2017.

Brain kernel: a new spatial covariance function for fMRI data. Anqi Wu, Samuel A. Nastase, Christopher A. Baldassano, Nicholas B. Turk-Browne, Kenneth A. Norman, Barbara E. Engelhardt, Jonathan W. Pillow. NeuroImage. [web link][bioRxiv][code]

This work provides a theoretical connection between spike-triggered covariance analysis and nonlinear subunit models by showing that a “convolutional” decomposition of the spike-triggered average and covariance matrix provides an asymptotically efficient estimator for a class of quadratic subunit models. We establish theoretical conditions for identifiability of the subunit filter and pooling weights, and show that the moment-based estimator performs well even when the assumptions about model specification are violated. Finally, we analyze neural data from macaque primary visual cortex and show that this moment-based estimator outperforms a highly regularized generalized quadratic model and achieves nearly the same prediction performance as the full maximum-likelihood estimator, at substantially lower cost.
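
The moment-based quantities the estimator builds on are the spike-triggered average and covariance; a minimal sketch of computing them from a stimulus matrix and spike counts is below. The convolutional decomposition and identifiability analysis from the paper are not reproduced here, and the toy filter and nonlinearity are hypothetical.

```python
# Sketch of the spike-triggered moments underlying the estimator: the spike-triggered
# average (STA) and covariance (STC) of the stimulus. The convolutional decomposition
# into subunit filters and pooling weights is what the paper adds on top of these.
import numpy as np

def spike_triggered_moments(X, y):
    """X: (T, D) stimulus matrix; y: (T,) spike counts. Returns (STA, STC)."""
    n_sp = y.sum()
    sta = X.T @ y / n_sp                       # spike-weighted mean stimulus
    Xc = X - sta                               # center around the STA
    stc = (Xc * y[:, None]).T @ Xc / n_sp      # spike-weighted covariance
    return sta, stc

# Toy data: white-noise stimulus and Poisson spikes from a quadratic nonlinearity.
rng = np.random.default_rng(4)
T, D = 5000, 20
X = rng.standard_normal((T, D))
w = np.zeros(D); w[5:10] = 1.0                 # hypothetical filter
y = rng.poisson(np.exp(0.2 * (X @ w) ** 2 - 1.0).clip(max=50.0))
sta, stc = spike_triggered_moments(X, y)
print(sta.shape, stc.shape)
```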

Convolutional spike-triggered covariance analysis for neural subunit models. Anqi Wu, Il Memming Park, and Jonathan W. Pillow. NeurIPS 2015 (acceptance rate: 21.93%) [paper][code]

This work extends standard Bayesian optimization methods to exploit first- and second-order derivative information from the unknown function. We perform sampling-based inference to incorporate uncertainty over hyperparameters, and show that both hyperparameter and function uncertainty decrease much more rapidly when derivatives are used.
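
In 1-D with an RBF kernel, the key ingredient is the closed-form cross-covariance between function values and derivatives; the sketch below builds that joint covariance and conditions a GP on both values and gradients. Hessian observations, hyperparameter sampling, and the acquisition loop from the paper are omitted, and the lengthscale and data are illustrative.

```python
# Sketch of the key ingredient for derivative-aware Bayesian optimization in 1-D:
# the joint GP covariance over function values f(x) and gradients f'(x) under an
# RBF kernel. Hessians, hyperparameter sampling, and acquisition optimization are
# omitted; the lengthscale and toy data are illustrative.
import numpy as np

def rbf_blocks(x, xp, length=1.0):
    """Covariance blocks for an RBF kernel k(x, xp) and its derivatives in 1-D."""
    r = x[:, None] - xp[None, :]
    k = np.exp(-0.5 * r**2 / length**2)
    dk_dxp = (r / length**2) * k                      # cov(f(x), f'(xp))
    dk_dx = (-r / length**2) * k                      # cov(f'(x), f(xp))
    d2k = (1.0 / length**2 - r**2 / length**4) * k    # cov(f'(x), f'(xp))
    return k, dk_dxp, dk_dx, d2k

def joint_cov(x, xp, length=1.0):
    """Covariance of [f(x), f'(x)] with [f(xp), f'(xp)]."""
    k, dk_dxp, dk_dx, d2k = rbf_blocks(x, xp, length)
    return np.block([[k, dk_dxp], [dk_dx, d2k]])

# Observe f and f' at a few points, then predict f on a grid.
x_tr = np.array([-2.0, 0.0, 1.5])
f_tr, g_tr = np.sin(x_tr), np.cos(x_tr)               # values and gradients
y = np.concatenate([f_tr, g_tr])
x_te = np.linspace(-3, 3, 50)

K = joint_cov(x_tr, x_tr) + 1e-8 * np.eye(2 * len(x_tr))
k_fx, k_fdx, _, _ = rbf_blocks(x_te, x_tr)
K_star = np.hstack([k_fx, k_fdx])                     # cov(f(x_te), [f, f'](x_tr))
mean_te = K_star @ np.linalg.solve(K, y)
print(np.max(np.abs(mean_te - np.sin(x_te))))         # small near the training points
```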

Exploiting gradients and Hessians in Bayesian optimization and Bayesian quadrature. Anqi Wu, Mikio C. Aoi, and Jonathan W. Pillow. arXiv:1704.00060 [arXiv]