Note: Recently I started a new position at Sandia National Laboratories in Albuquerque, New Mexico. Previously, I was a post-doctoral researcher in the laboratory of Fred Gage at the Salk Institute for Biological Studies in La Jolla, CA. Before that,
I received my PhD from the Computational Neuroscience program at UCSD, a specialization within the UCSD Department of Neuroscience, where my research focused on the development and use of computational tools to understand the plasticity of neural networks in the adult brain. This site mostly summarizes my postdoctoral and graduate research, though these remain ongoing interests of mine.
My Sandia website can be found here
My PhD thesis work was published in Neuron, and was the subject of several online news reports:
CNN.com news report
Newsweek.com blog article
For details regarding adult neurogenesis, see the Scholarpedia article on neurogenesis, which I helped author. We have also recently published review articles in Nature Reviews Neuroscience, Trends in Cognitive Sciences, and Hippocampus.
My research has focused on several areas:
1. Neural network modeling of adult neurogenesis and the dentate gyrus
New neurons continuously integrate into the adult brain in a region known as the dentate gyrus (DG), which is thought to be critically important in the formation of episodic memories. Potentially thousands of these neurons integrate into this region on a daily basis, suggesting that they may have a significant influence on our ability to form new memories. Recently, I developed a computational model that generated three distinct predictions for what these new neurons contribute to DG function. I am currently working with neurobiologists specializing in animal behavioral tasks and electrophysiology to design studies that directly test these predictions. In addition, I am continuing to investigate neurogenesis from a modeling perspective in the hope of revealing other functionally relevant features of this unique process.
Why is a computational understanding of neurogenesis important?
2. Effect of network scaling on the validity of computational models
Models in computational neuroscience, as in most engineering and science fields, typically simplify the system being studied by reducing complexity. This reductionism includes both simplification of details (analysts don't model every single stock transaction to understand the stock market) and reduction of scale (engineers don't start modeling a bridge or airplane by building a full-sized replica). Likewise, in neuroscience, modelers typically study neural systems by making simplifying assumptions about the details (not all neuron models are channel based, nor do they need to be) and by reducing scale. Regarding this latter point, we are specifically interested in whether building a model with substantially fewer neurons is like studying a small-scale replica of an airplane, or like trying to understand the whole stock market by looking at a single company.
By using cloud computing, via the Amazon Web Services platform, we are directly investigating whether complex neural network models behave fundamentally differently when they are full-sized (i.e., mouse-sized) versus reduced in size (typical models are at least 100x smaller).
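One simple way to see why scale can matter (a toy sketch of my own, not the actual research models) is to note that in a randomly connected network with a fixed connection probability, each neuron's expected number of synaptic inputs grows with network size. Downscaling a network 100x therefore changes every neuron's input statistics, unless connection probabilities or synaptic weights are rescaled to compensate. The numbers below are illustrative, not measured values:

```python
# Toy illustration of a scaling confound in network models: with a fixed
# connection probability p, the expected number of synaptic inputs per
# neuron depends on network size N. The neuron counts used here are
# hypothetical, chosen only to show the 100x effect.

def expected_inputs(n_neurons, p=0.01):
    """Expected synaptic inputs per neuron in a random network
    where each of the other (n_neurons - 1) cells connects with
    probability p."""
    return p * (n_neurons - 1)

full_size = 1_000_000          # hypothetical "full-sized" neuron count
reduced_size = full_size // 100  # typical 100x-reduced model

print(expected_inputs(full_size))     # inputs per neuron at full scale
print(expected_inputs(reduced_size))  # inputs per neuron after reduction
```

Because each neuron in the reduced model receives roughly 100x fewer inputs, its total synaptic drive (and hence its dynamics) can differ qualitatively from the full-scale case, which is exactly the kind of discrepancy this line of work probes.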
3. Brain-inspired computing
The goal of developing new computers based on the brain's dynamics and architecture has long been a focus of computer scientists and neuroscientists, but clear demonstrations of this approach's value have been few and far between. This is increasingly a focus of my research, and I have helped develop an exciting workshop in this area attended by academic, industry, and government researchers: http://nice.sandia.gov