Research interests

I am broadly interested in cognitive neuroscience and visual processing.  More specifically, I want to understand the perceptual mechanisms that we use in our social interactions.  For example, when we meet a friend in the street, we recognize them by their face and voice, and we also extract a wealth of information from their facial expressions and tone of voice, trying to work out how they are feeling and how they are reacting to what we are saying.  I am interested in the computations we use both to recognize who someone is (person identity recognition) and to infer their emotional states (emotion recognition).

My research has focused on the topics below.



Developmental cognitive impairments

During my PhD, I focused on selective impairments in person identity recognition arising from developmental conditions.

Prosopagnosia is a condition in which people have difficulties recognizing faces, including those of friends and relatives.  It can be caused by brain injury (acquired prosopagnosia), but some people with no history of neurological damage also experience severe problems recognizing faces (developmental prosopagnosia).  During my PhD in Brad Duchaine's lab, we investigated how developmental prosopagnosics detect faces (Garrido, Duchaine, & Nakayama, 2008) and recognize the emotions and mental states of others (Garrido, Furl, Draganski, et al., 2009; Duchaine, Murray, Turner, White, & Garrido, 2009).  We also used a comprehensive, multi-method approach (testing the same sample of participants) to investigate the neural basis of developmental prosopagnosia (Garrido, Furl, Draganski, et al., 2009; Furl et al., 2011; Song et al., 2015; Lohse et al., 2016).  We showed that individuals with developmental prosopagnosia exhibit structural, functional, and connectivity differences from controls in face-selective brain regions.

During my postdoc with Ken Nakayama, we extended this research into large-scale studies of individual differences in face recognition.  By investigating the full range of performance in a particular ability, rather than focusing only on cases of extremely poor performance, we can learn more about its associations and dissociations with other abilities, and with neural, genetic, and other biological mechanisms.

My colleagues and I also described a case of voice recognition impairment analogous to cases of developmental prosopagnosia.  KH reported difficulties recognizing familiar voices, for example on the phone or the radio, or whenever she could not see the speaker's face (Garrido, Eisner, McGettigan, et al., 2009).  KH had no brain injury or auditory problems that could explain these difficulties.  This was the first report of developmental voice agnosia (also called phonagnosia), and it has provided new insights into our understanding of voice recognition.  Since the publication of this case, other cases of developmental phonagnosia have been reported (e.g., Roswandowitz et al., 2014; Xu et al., 2015).



Emotion recognition

Faces convey much of the information that we use when interacting with other people.  We recognize friends by their faces, and we look at someone's face to understand how they are feeling, whether they disapprove of what we are saying, or whether we should trust them.  An important question in cognitive neuroscience is whether these different facial cues are processed by separate cognitive mechanisms.  My PhD work examined whether facial identity and facial expression are processed by the same or separate mechanisms (Garrido, Furl, Draganski, et al., 2009).

I have also collaborated on several research projects investigating emotion recognition.  In a study using transcranial magnetic stimulation (TMS), we investigated where and when brain areas contribute to facial expression discrimination (Pitcher, Garrido, Walsh, & Duchaine, 2008).  My colleagues and I have also examined emotion recognition in individuals with mirror-touch synaesthesia (Banissy, Garrido, Kusnir, Duchaine, Walsh, & Ward, 2011) and social anhedonia (Germine, Garrido, Bruce, & Hooker, 2011).



Multisensory processing of faces and voices

More recently, I have become interested in the multisensory integration of information from faces and voices.

In one project, we compared the processing of emotions across modalities (Kuhn et al., 2017).  We showed that the representational structure of emotions perceived from facial expressions and from vocal expressions was highly similar, suggesting that emotion representations are similar or shared across faces and voices.

In another project, funded by the Leverhulme Trust, we are investigating how information from faces and voices is combined in the brain. I hope to report more on that soon...