During conversation, we take turns at listening to our partner and producing a response. Research suggests that we take these turns with very little gap (around 200 milliseconds) or overlap between them. Yet laboratory studies suggest that producing speech in isolation is comparatively slow - it takes us at least 600 milliseconds to name a picture. In this work, I ask how people coordinate their turns during conversation, focusing on how and when they decide (1) what they want to say and (2) when they want to speak.
Some of this work has involved collaborations with Dr. Chiara Gambi (University of Warwick), Prof. Antje Meyer (Max Planck Institute for Psycholinguistics), and Prof. Martin Pickering (University of Edinburgh).
One mechanism that likely supports coordination during conversation is linguistic prediction. For example, if you hear a sentence like "The boy will go outside and fly the...", you would probably guess that the next word will be "kite" rather than "airplane". But how do you make these predictions? In this work, I ask (1) what mechanisms support such prediction and (2) what information people use to predict.
Some of this work has involved collaborations with Dr. Lauren Hadley (University of Nottingham), Dr. Naomi Nota (University of Edinburgh), and Prof. Martin Pickering (University of Edinburgh).
Although conversation may seem effortless for some, people with hearing loss typically struggle - they tend to avoid or disengage from conversation altogether, which can lead to increased social isolation and physical health problems. The most obvious difficulty people with hearing loss experience is being unable to hear what was said. But this difficulty leads to a range of cognitive problems that affect conversation. In this work, I investigate topic maintenance in conversation in people with hearing loss, focusing in particular on the relationship between what they say and what their partner has previously said, and on the cognitive mechanisms that may support topic maintenance.
This work involves a collaboration with Prof. Danielle Matthews (University of Sheffield).
My work has focused on conversation between humans, but I have recently become interested in how we communicate with our pets. Although there is a lot of research on how we talk to our pets (using so-called pet-directed speech, akin to child-directed speech), very little has examined what we actually say to them. In this work, I ask whether people perceive their pets as communicative entities, much as they do other humans. I also ask whether the quality of people's communication with their pets affects their relationship with them.
This work involves a collaboration with Prof. Steve Loughnan (University of Edinburgh).
If you are interested in hearing more about any of these projects, please reach out at r.corps@sheffield.ac.uk.