Welcome to the Birkbeck Auditory Neuroscience Lab

We are part of the Auditory Language Processing, Hearing and Attention Lab (ALPHALAB) at the Department of Psychological Sciences, Birkbeck. Our lab uses behavioural techniques and EEG to study the perception of speech categories, individual differences in language learning, and attention to sound and sound features.

News & Updates

New paper! ⏰

6 March 2024

Our paper investigating the importance of various acoustic cues in the categorization of speech versus song has finally been published!

Citation: Kachlicka, M., Patel, A. D., Liu, F., & Tierney, A. T. (2024). Weighting of cues to categorization of song versus speech in tone-language and non-tone-language speakers. Cognition, 245, 105757. https://doi.org/10.1016/j.cognition.2024.105757

New preprint! 📣

22 February 2024

Which statistics does the brain track while monitoring the rapid progression of sensory information?

Across several experiments, participants passively listened to tone sequences containing transitions between a range of stochastic patterns but lacking transient deviants, so distributional changes could only be inferred by tracking the unfolding sequence statistics. Predictions were guided by a Bayesian predictive inference model that monitored hidden changes in the sequences’ statistical structure in terms of change probability (the likelihood of a statistical change) and precision (the inferred reliability of past inputs). The results show that listeners automatically track the statistics of unfolding sounds even when these are irrelevant to behaviour. Transitions between sequence patterns drove an increase in the sustained EEG response. This was observed for a range of distributional statistics, and even in situations where behavioural detection of these transitions was at floor. These observations suggest that the modulation of the sustained EEG response reflects a universal process of belief updating within the brain. By linking computational modelling with brain responses, the authors demonstrate that the dynamics of these transition-related responses align with the tracking of ‘precision’, that is, the confidence or reliability assigned to a predicted sensory signal.

Citation: Zhao, S., Skerritt-Davis, B., Elhilali, M., Dick, F., & Chait, M. (2024). Sustained EEG responses to rapidly unfolding stochastic sounds reflect precision tracking. https://doi.org/10.1101/2024.01.08.574691

New paper! ⏰

9 February 2024

Auditory processing as a multidimensional phenomenon encompassing auditory acuity, attention to acoustic details, and auditory-motor integration


The interaction view posits that auditory processing encompasses not only the perception of acoustic details (acuity), but also the direction of attention towards specific acoustic details (attention) and the transformation of auditory information into motor output (integration). To test this hypothesis, we examined whether a model incorporating all three components (acuity, attention, and integration) could account for additional variance in phonological and morphosyntactic aspects of language learning outcomes among Chinese learners of English. The results show that the test scores tapped into distinct components of auditory processing (acuity, attention, and integration), each playing a comparable role in explaining various aspects of L2 learning, even after controlling for biographical background and working memory. Our findings align with growing evidence that auditory processing, which includes perceptual, cognitive, and motoric components, plays a pivotal role in language learning throughout the lifespan. Read the paper for more details!

Citation: Saito, K., Kachlicka, M., Suzukida, Y., Mora-Plaza, I., Ruan, Y. & Tierney, A. T. (2024). Auditory processing as perceptual, cognitive, and motoric abilities underlying successful second language acquisition: Interaction model. Journal of Experimental Psychology: Human Perception and Performance, 50(1), 119-138. https://doi.org/10.1037/xhp0001166 

New paper! ⏰

31 January 2024

Is listening performance bound to the specific characteristics of musical instrument training or other types of auditory expertise?


This paper introduces a new population of auditory experts and compares their performance on a range of listening tasks with that of musicians and controls. Results show that audio engineers and performing musicians had lower psychophysical thresholds than controls, particularly in pitch perception. Audio engineers could also better memorise and recall non-musical auditory scenes, whereas musicians performed best in a sustained selective attention task. Finally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise-ratio thresholds than the other two groups, but an online follow-up study failed to replicate this musician advantage. Overall, this study shows that investigating a wider range of forms of auditory expertise can help us corroborate (or challenge) the specificity of the advantages previously associated solely with musical instrument training.

Citation: Caprini, F., Zhao, S., Chait, M., Angus, T., Pomper, U., Dick, F., & Tierney, A. T. (2024). Generalization of auditory expertise in audio engineers and instrumental musicians. Cognition, 105696. https://doi.org/10.1016/j.cognition.2023.105696

New preprint! 📣

16 January 2024

Can experience with the statistical regularities of speech and music increase the salience of informative dimensions within a given domain?

Across three experiments, listeners heard speech and tone streams varying in pitch and duration at fixed rhythms and either selectively attended to variations in pitch or duration or listened to these sequences without directing attention to either dimension. Listeners showed better dimension-selective attention to pitch and enhanced pitch tracking for tone sequences compared to speech sequences. The opposite pattern was observed for the temporal dimension: better attention to duration and enhanced duration tracking for speech. Individual differences in attention to pitch vs duration were stable across domains for acoustically matched stimuli. The same was true for the relative salience of pitch vs duration, but only for participants without percussion training. So the short answer is: yes! These findings suggest that long-term prior experience with the statistical regularities within a domain can enhance the salience of informative acoustic dimensions. Check out the preprint for more details!

Citation: Symons, A. E., Kachlicka, M., Wright, E., Razin, R., Dick, F., & Tierney, A. T. (2024). Dimensional salience varies across verbal and nonverbal domains. https://doi.org/10.31234/osf.io/d4u93 

Dr Ashley Symons awarded funding from Sempre 💰

11 January 2024


Ashley was awarded the Arnold Bentley New Initiatives Fund by the Society for Education, Music and Psychology Research (Sempre) for a project investigating musical cue weighting in adolescence. Congratulations!

New paper! ⏰

23 December 2023

In their newest paper, Katya Petrova, Kyle Jasmin, Kazuya Saito and Adam Tierney examined how length of residence in an L2 environment modifies perceptual strategies for suprasegmental categorization.


They showed that, when categorizing phrase boundaries, Mandarin Chinese learners of English who had lived in the UK for more than 3 years weighted duration more highly than Mandarin speakers with less than 1 year of experience living abroad. However, relative to native English speakers, both groups of learners continued to weight duration less highly and pitch more highly during musical beat categorization, and struggled to ignore pitch and selectively attend to amplitude in speech. These results suggest that adult L2 experience can retune perceptual strategies in specific contexts, but that global acoustic salience might be more resistant to change.

Citation: Petrova, K., Jasmin, K., Saito, K., & Tierney, A. T. (2023). Extensive residence in a second language environment modifies perceptual strategies for suprasegmental categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 49(12), 1943-1955. https://doi.org/10.1037/xlm0001246

Posters from the SNL2023 meeting

4 December 2023

Did you miss our poster presentations at this year's Society for the Neurobiology of Language Meeting in Marseille? Here they are!

"Dimensional modulation in continuous speech captures attention" presented by Dr Ashley Symons

She found that salient changes in roughness, loudness, and pitch led to a transient increase in tapping speed 250–750 ms after distracting sound changes. Individual differences in the magnitude of the tapping shift were also highly reliable across participants. A similar effect occurred when continuous speech was presented in the background: listeners’ tapping speeded up 500–1000 ms following salient amplitude, pitch, and spectral changes. These findings show that changes along acoustic dimensions in speech can capture attention and disrupt ongoing goal-directed behaviour. One potential mechanism could be arousal driven by activity in the locus coeruleus, leading to an expansion of time perception.

Citation: Symons, A. E., Dick, F., & Tierney, A. T. (2023). Dimensional modulation in continuous speech captures attention. http://dx.doi.org/10.13140/RG.2.2.14173.38883 

"Effects of first language background and musical experience on cue weighting, attention and dimensional salience in speech and music" presented by Magdalena Kachlicka

Compared to native English speakers, Mandarin speakers showed enhanced attention to, and preferential use of, pitch across behavioural tasks: they gave more weight to pitch during prosody and musical beat categorization and demonstrated superior attention to pitch. She found no effect of L1 background on neural entrainment to acoustic dimensions, but the frequency-following response (FFR) to stimulus pitch was enhanced in Mandarin speakers, suggesting that speaking a tone language can boost early pitch encoding without affecting pitch salience. A comparison of cue-weighting strategies between musicians and non-musicians revealed that musical training sharpens tuning to the dimension most relevant to a given categorization task. These results are consistent with attention-to-dimension theories of cue weighting, which hold that listeners redirect their attention toward the most informative or task-relevant cues.

Citation: Kachlicka, M., Symons, A. E., Saito, K., Dick, F., & Tierney, A. T. (2023). Effects of language background and musical experience on cue weighting, attention and dimensional salience in speech and music. http://dx.doi.org/10.13140/RG.2.2.28291.48165 

ICN Seminar Series with Dr Kyle Jasmin - Monday 27/11

24 November 2023

Kyle, a former lab member and collaborator, will talk about his research into understanding neurocognitive and perceptual diversity in communication. You can find out more about his recent work here: https://www.kylejasm.in/ 

Date & time: 27 November (Monday), 3:15 - 4:15 pm (UK)

Location: ICN UCL, Alexandra House, RMB10

For more details on the seminar series see: https://www.ucl.ac.uk/icn/seminars 

Speech Science Forum with Prof Adam Tierney - Thursday 16/11

15 November 2023

Adam will talk about how the perception of ambiguous sounds reveals the diversity of human perception.

Abstract: One assumption driving research on human perception is that different people solve the underlying computational problems in the same way. According to this assumption, the world supplies a set of constraints and statistical regularities to which humans adapt, and although some people might be better than others at detecting these patterns, the optimal strategy does not differ across individuals. This assumption, however, may not always hold. For example, for a listener who has difficulty processing a particular dimension the optimal strategy may be to direct attention away from it and towards other stimulus features. One way to test this assumption is to present people with ambiguous stimuli that can be categorized in multiple ways to see whether individuals differ in how they are perceived. In this talk I will present evidence suggesting that categorization of ambiguous sounds varies across individuals, and that these different strategies reflect an individual’s perceptual strengths and weaknesses and their history of exposure to and use of sound.

Date & time: 16 November (Thursday), 4:00 - 5:00 pm (UK)

For more details on how to join see: https://www.ucl.ac.uk/pals/events/2023/nov/speech-science-forum-adam-tierney 

New preprint! 📣

13 November 2023

Have you ever wondered what acoustic cues people use to distinguish between speech and song? Are there any cultural differences in how we make these decisions? Check out our new preprint for some answers!

We find that listeners from different backgrounds agree on which phrases sound like song after repetition, and that the strength of this effect does not depend on people's country of residence or language background. We also show that people rely on similar cues to distinguish between speech and song: all listeners used small pitch intervals, within-syllable pitch contours, steady beats, and fit to musical key. However, only tone-language speakers used pairwise syllabic durational variability as a cue, with more variable speech perceived as more song-like. Overall, our findings seem to support the idea that how we perceive music is influenced by both our biological predispositions and our cultural background.

Citation: Kachlicka, M., Patel, A. D., Liu, F., & Tierney, A. (2023). Weighting of cues to categorization of song versus speech in tone language and non-tone-language speakers. https://doi.org/10.31234/osf.io/dwfsz

Introducing SLA Speech Tools!

3 March 2023

We are excited to introduce the SLA Speech Tools website! Members of the Auditory Neuroscience Lab have created a new resource for second language acquisition researchers and educators. Here researchers will find a wide range of validated measures for assessing individual differences in second language learning. Educators will find the latest research-based pronunciation teaching materials that are ready to be used in the classroom. Visit the website to find out more! 

Website: http://sla-speech-tools.com/ 

Citation: Mora-Plaza, I., Saito, K., Suzukida, Y., Dewaele, J.-M., & Tierney, A. (2022). Tools for second language speech research and teaching. http://sla-speech-tools.com. https://doi.org/10.17616/R31NJNAX