Aoi et al. (2020). Prefrontal cortex exhibits multidimensional dynamic encoding during decision-making. Nature Neuroscience.
Bergomi et al. (2019). Towards a topological–geometrical theory of group equivariant non-expansive operators for data analysis and machine learning. Nature Machine Intelligence.
Li et al. (2019). Dendritic computations captured by an effective point neuron model. PNAS 116(30), 15244–15252.
Frankle & Carbin (2019). The lottery ticket hypothesis: finding sparse, trainable neural networks. arXiv:1803.03635 & ICLR.
Zhou et al. (2019). Non-vacuous generalization bounds at the ImageNet scale: a PAC-Bayesian compression approach. arXiv:1804.05862 & ICLR.
Dold et al. (2018). Stochasticity from function - why the Bayesian brain may need no noise. arXiv:1809.08045.
Tang et al. (2018). Recurrent computations for visual pattern completion. PNAS, 1–6.
Joglekar et al. (2018). Inter-areal balanced amplification enhances signal propagation in a large-scale circuit model of the primate cortex. Neuron 98(1), 222–234.
Rubin, Abbott & Sompolinsky (2017). Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity. PNAS 114(44), E9366–E9375.
Scholz et al. (2017). Stochastic feeding dynamics arise from the need for information and energy. PNAS 114(35), 9261–9266.
Arpit et al. (2017). A closer look at memorization in deep networks. arXiv:1706.05394.
Chang et al. (2016). A compositional object-based approach to learning physical dynamics. arXiv:1612.00341.
Funamizu, Kuhn & Doya (2016). Neural substrate of dynamic Bayesian inference in the cerebral cortex. Nature Neuroscience 19, 1682–1689.
Thalmeier et al. (2016). Learning universal computations with spikes. PLoS Computational Biology 12(6): e1004895.
Aitchison, Corradi & Latham (2016). Zipf's law arises naturally when there are underlying unobserved variables. PLoS Computational Biology 12(12): e1005110.
Stringer et al. (2016). Inhibitory control of correlated intrinsic variability in cortical networks. eLife 5:e19695.
Prentice et al. (2016). Error-robust modes of the retinal population code. PLoS Computational Biology 12(11): e1005148.
Wystrach, Mangan & Webb (2015). Optimal cue integration in ants. Proceedings of the Royal Society B 282(1816).
Rezende & Gerstner (2014). Stochastic variational learning in recurrent spiking networks. Front. Comput. Neurosci. 8:38.
Latimer, Rieke & Pillow (2018). Inferring synaptic inputs from spikes with a conductance-based neural encoding model. bioRxiv.
Pica et al. (2017). Quantifying how much sensory information in a neural code is relevant for behavior. NIPS.
Gao et al. (2017). A theory of multineuronal dimensionality, dynamics and measurement. bioRxiv.
Williams et al. (2017). Unsupervised discovery of demixed, low-dimensional neural dynamics across multiple timescales through tensor components analysis. bioRxiv.
Chen, Beck & Pearson (2017). Neuron's eye view: inferring features of complex stimuli from neural responses. PLoS Computational Biology 13(8).
Huh & Sejnowski (2017). Gradient descent for spiking neural networks. arXiv:1706.04698.
Stokes & Purdon (2017). A study of problems encountered in Granger causality analysis from a neuroscience perspective. PNAS 114(34), E7063–E7072.
Tensor decomposition: Seely et al. (2016). Tensor analysis reveals distinct population structure that parallels the different computational roles of areas M1 and V1. PLoS Computational Biology 12(11): e1005164; Onken et al. (2016). Using matrix and tensor factorizations for the single-trial analysis of population spike trains. PLoS Computational Biology 12(11): e1005189.