Current research
How does the brain make sense of the world?
We can recognise tens of thousands of objects, yet despite this vast number, recognition is remarkably quick and accurate, completed within a few hundred milliseconds.
Recognition depends on a multitude of dynamic transformations of information in the brain, from low-level visual attributes through to higher-level visual representations and semantic meaning – not simply the name of the object, but access to its relevant properties and how it relates to other objects. Our ability to rapidly recognise objects in our environment is fundamental to acting appropriately in the world. The rapid extraction of semantic meaning from vision provides a platform for complex behaviours such as object identification, object use and navigational planning, and without accessing semantics, we would not be able to communicate with others about our environment.
Our research asks: what are the neural dynamics and mechanisms by which vision activates semantics?
How quickly do we access object semantics from vision?
How are different kinds of semantic information activated over time?
How do neural oscillations and brain connectivity support semantic access?
How do representations of superordinate (e.g. animal) and basic-level (e.g. tiger) semantic information differ?
By combining MEG, EEG, fMRI and neuropsychology, we take a multimodal approach to studying semantic representations in the brain.
Recent papers:
von Seth J, Nicholls VI, Tyler LK & Clarke A. (2023). Recurrent connectivity supports higher-level visual and semantic object representations in the brain. Communications Biology. OA version. (preprint version bioRxiv)
Clarke A. (2020). Dynamic activity patterns in the anterior temporal lobe represents object semantics. Cognitive Neuroscience, 11(3), 111-121. OA Article
How does the visual environment shape the process of recognition?
Recognising objects depends on dynamic transformations of information from vision to semantics - but in the real world, our understanding of what we see is shaped by the environment. When we see an object, we are already embedded in a rich and complex environment, which creates expectations about the things we are likely to see.
Our research tests how the environment changes the dynamics of visual and semantic activity in the brain, using a multimodal brain imaging framework based on fMRI, MEG, EEG and mobile EEG, together with emerging methodologies including augmented reality and computational modelling, and analyses of multivariate patterns, neural oscillations and brain connectivity.
Recent papers:
Nicholls VI, Krugliak A, Alsbury-Nealy B, Gramann K, and Clarke A. (2024). Congruency effects on object recognition persist when objects are placed in the wild: An AR and mobile EEG study. bioRxiv
Krugliak A, Draschkow D, Võ MLH, & Clarke A. (2023). Semantic object processing is modulated by prior scene context. Language, Cognition and Neuroscience. OA version. (preprint version bioRxiv)
Krugliak A & Clarke A. (2022). Towards real-world neuroscience using mobile EEG and augmented reality. Scientific Reports 12, Article number: 2291. OA paper