This site will be moving soon.

Upcoming & recent events

Date(s): Event

April 17: NYU Center for Mind, Brain and Consciousness Debate,
"Is There Unconscious Perception?"
with Ned Block, Ian Phillips, Hakwan Lau, & Marisa Carrasco
(video here!)

April 19 - May 2: Research with Mitsuo Kawato and Kazuhisa Shibata in the Decoded Neurofeedback group
Advanced Telecommunications Research Institute International, Kyoto, Japan

May 18 - 24: Vision Sciences Society 2017 annual meeting
1. Symposium Chair & Speaker: "Transcranial magnetic stimulation to visual cortex induces suboptimal introspection," in the "How can you be so sure? Behavioral, computational, and neuroscientific perspectives on metacognition in perceptual decision-making" symposium
2. Talk: "Human intracranial electrophysiology suggests suboptimal calculations underlie perceptual confidence"
St. Pete Beach, Florida

June 25 - 29: Organization for Human Brain Mapping 2017 annual meeting
Talk & Poster: "Human ECoG reveals dissociable calculations for perceptual decisions and confidence judgments"
Vancouver, BC

I'm an Assistant Professor in Bioengineering at the University of California, Riverside, and a Visiting Researcher at the Advanced Telecommunications Research Institute in Nara, Japan.

Previously, I was a postdoc at UCLA in the Consciousness & Metacognition Lab working with Hakwan Lau.  I received my PhD in 2014, having worked in the Visual and Multisensory Perception Lab at UCLA with Ladan Shams.  

My research aims to understand the computational and neural mechanisms of conscious perception and sensory metacognition: our subjective sense of awareness and confidence in our perceptual decisions.  I use psychophysics, computational modeling, human neuroimaging (fMRI and ECoG), and non-invasive neural stimulation techniques to look at how our brains assess the reliability of incoming sensory signals, and translate that assessment into a subjective sense of confidence and a conscious percept.  

Neural correlates of confidence and metacognition

I work to identify and manipulate neural representations of confidence and awareness in perception, relying primarily on a combination of functional neuroimaging (fMRI; ECoG), machine learning (SVM; sparse logistic regression; MVPA), and psychophysics.  Coupled with computational modeling and decoded neurofeedback (DecNef), my research aims to clarify how the brain computes confidence in a perceptual experience, and what causes a low-level representation to rise into awareness.  DecNef is exciting because it lets people manipulate their own brain activity entirely unconsciously, and thereby change their confidence ratings and perhaps even their awareness of an external stimulus.  We're also working on combining this approach with probabilistic and implementation-level models based on tuned normalization to investigate how confidence may be neurally computed.  This spring I spent time at ATR in Japan using DecNef to begin a project investigating how a low-level representation becomes available to higher-level processes like metacognition and awareness.

Hakwan and I have just published an ECoG paper using machine learning techniques and computational modeling to examine the spatial and temporal dissociations between the computations underlying perceptual decisions and confidence judgments.  It seems that people aren't optimal at judging confidence, despite the popular opinion in the field.  I talked about this project at SfN in November, and will also be sharing it with the OHBM community in June. The paper is now in press at Nature Human Behaviour (link to follow soon).

Probabilistic, biologically plausible computational models

Hakwan and I published a paper last year using Bayesian ideal observer analysis to show that blindsight can't be induced in normal human observers, at least not with the techniques everybody has been relying on for the past several decades.  Last June I got to argue about this topic with Ned Block, Bob Kentridge, and Ian Phillips in a Symposium at ASSC in Buenos Aires. We just had a redux of this symposium at the NYU Philosophy department in April, also with Marisa Carrasco and Hakwan, hosted by Dave Chalmers and Ned Block.  Video is here!

Right now, I'm combining Bayesian modeling with a leaky competing accumulator framework to understand how neural architecture could underlie some strange dissociations between decisions and confidence judgments reported in the literature.  You'd think that if a system is optimal, confidence should always represent the "probability of being correct"... but it seems that confidence may also track something else: the magnitude of evidence contributing to the decision you just made, not just how likely you are to be right.  I talked about this at SfN in November, too, as part of the Mini-Symposium I chaired.
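The contrast between those two confidence read-outs can be made concrete with a toy race simulation. This is a deliberately simplified cousin of the leaky competing accumulator, not the actual model (the drift rates, leak, and noise values below are arbitrary assumptions): two units accumulate noisy evidence for the two choices, the larger one at the deadline wins, and confidence can then be read out either as the balance of evidence between the units or as the raw magnitude of the winning unit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy leaky race model: accumulator 0 favors the correct choice.
n_trials, n_steps = 2000, 100
drifts = np.array([0.10, 0.05])   # assumed mean inputs (correct, incorrect)
leak = 0.02                       # leak per time step
noise_sd = 0.5

acc = np.zeros((n_trials, 2))
for _ in range(n_steps):
    inputs = drifts + rng.normal(0.0, noise_sd, (n_trials, 2))
    acc += inputs - leak * acc    # leaky accumulation of noisy evidence
    acc = np.maximum(acc, 0.0)    # activations stay non-negative

choice = acc.argmax(axis=1)       # unit with more evidence wins
correct = choice == 0

# Two candidate confidence read-outs:
balance = np.abs(acc[:, 0] - acc[:, 1])   # relative (balance-of-evidence)
winner = acc.max(axis=1)                  # magnitude of the winning unit

print(f"accuracy: {correct.mean():.2f}")
print(f"mean balance-of-evidence confidence: {balance.mean():.2f}")
print(f"mean winner-magnitude confidence:    {winner.mean():.2f}")
```

The balance-of-evidence read-out tracks relative evidence and so behaves more like probability correct, while the winner-magnitude read-out depends only on evidence for the chosen option, so the two can rank the same trials differently: that is the kind of decision/confidence dissociation at stake here.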

I'm also working with Michele Basso to use single- and multi-unit recordings to validate this model, using a multi-channel v-probe to record from Rhesus macaque superior colliculus.  I also presented a preliminary version of this work at the COSYNE workshops last winter.

Finally, I'm working with Alicia Izquierdo to look at how confidence is computed in rodents.  So far, there are striking parallels between the rodents' behavior and that of human observers.  We are currently using various protein staining techniques to examine how concentrations of GABA (related to the computational model, above) may relate to metacognitive sensitivity and reinforcement learning rates.
External modulation of human cortex 

tDCS (transcranial direct current stimulation) applies a weak electrical current to target areas of the brain, while TMS (transcranial magnetic stimulation) uses powerful electromagnets to manipulate neural activity.  These techniques have been used to study perception, decision making, and problem solving, and have clinical applications as well.  My work uses computational modeling to understand the neural consequences of stimulation, and examines how these techniques can modulate conscious perception and metacognition.

We've recently collaborated with Tony Ro to use TMS to look for blindsight in normal observers.  We found that TMS to visual cortex doesn't actually make a target invisible (as many people assume), but it does induce suboptimal introspection that might be akin to real, neurological cases of blindsight.  I presented this project in a Symposium I chaired at VSS this spring, and the paper is now published in Cortex.

We also wrote a paper with Tony last year on the role of response bias and good experimental hygiene in the search for something as elusive as unconscious perception.